2025-07-06 19:20:10.630188 | Job console starting
2025-07-06 19:20:10.642485 | Updating git repos
2025-07-06 19:20:10.745175 | Cloning repos into workspace
2025-07-06 19:20:11.175426 | Restoring repo states
2025-07-06 19:20:11.263666 | Merging changes
2025-07-06 19:20:11.263692 | Checking out repos
2025-07-06 19:20:11.822138 | Preparing playbooks
2025-07-06 19:20:12.929420 | Running Ansible setup
2025-07-06 19:20:19.666988 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-07-06 19:20:21.160760 |
2025-07-06 19:20:21.160898 | PLAY [Base pre]
2025-07-06 19:20:21.195905 |
2025-07-06 19:20:21.196034 | TASK [Setup log path fact]
2025-07-06 19:20:21.228756 | orchestrator | ok
2025-07-06 19:20:21.268109 |
2025-07-06 19:20:21.269641 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-07-06 19:20:21.313045 | orchestrator | ok
2025-07-06 19:20:21.324790 |
2025-07-06 19:20:21.324889 | TASK [emit-job-header : Print job information]
2025-07-06 19:20:21.364427 | # Job Information
2025-07-06 19:20:21.364595 | Ansible Version: 2.16.14
2025-07-06 19:20:21.364630 | Job: testbed-deploy-in-a-nutshell-ubuntu-24.04
2025-07-06 19:20:21.364664 | Pipeline: post
2025-07-06 19:20:21.364688 | Executor: 521e9411259a
2025-07-06 19:20:21.364709 | Triggered by: https://github.com/osism/testbed/commit/6b1483ea11ea6b4bb31b3b1a68fb04362e76bb9a
2025-07-06 19:20:21.364732 | Event ID: 37822248-5a9e-11f0-96d6-95cb46b247a9
2025-07-06 19:20:21.371131 |
2025-07-06 19:20:21.371228 | LOOP [emit-job-header : Print node information]
2025-07-06 19:20:21.502592 | orchestrator | ok:
2025-07-06 19:20:21.502809 | orchestrator | # Node Information
2025-07-06 19:20:21.502876 | orchestrator | Inventory Hostname: orchestrator
2025-07-06 19:20:21.502909 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-07-06 19:20:21.502937 | orchestrator | Username: zuul-testbed03
2025-07-06 19:20:21.503215 | orchestrator | Distro: Debian 12.11
2025-07-06 19:20:21.503324 | orchestrator | Provider: static-testbed
2025-07-06 19:20:21.503357 | orchestrator | Region:
2025-07-06 19:20:21.503385 | orchestrator | Label: testbed-orchestrator
2025-07-06 19:20:21.503410 | orchestrator | Product Name: OpenStack Nova
2025-07-06 19:20:21.503435 | orchestrator | Interface IP: 81.163.193.140
2025-07-06 19:20:21.521187 |
2025-07-06 19:20:21.521301 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-07-06 19:20:22.022959 | orchestrator -> localhost | changed
2025-07-06 19:20:22.031108 |
2025-07-06 19:20:22.031210 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-07-06 19:20:23.403000 | orchestrator -> localhost | changed
2025-07-06 19:20:23.428498 |
2025-07-06 19:20:23.428808 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-07-06 19:20:24.004716 | orchestrator -> localhost | ok
2025-07-06 19:20:24.011594 |
2025-07-06 19:20:24.011700 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-07-06 19:20:24.050241 | orchestrator | ok
2025-07-06 19:20:24.084062 | orchestrator | included: /var/lib/zuul/builds/dac217f43f7d42b29ccad2ebb7bfad75/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-07-06 19:20:24.100028 |
2025-07-06 19:20:24.100135 | TASK [add-build-sshkey : Create Temp SSH key]
2025-07-06 19:20:26.302511 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-07-06 19:20:26.302772 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/dac217f43f7d42b29ccad2ebb7bfad75/work/dac217f43f7d42b29ccad2ebb7bfad75_id_rsa
2025-07-06 19:20:26.302812 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/dac217f43f7d42b29ccad2ebb7bfad75/work/dac217f43f7d42b29ccad2ebb7bfad75_id_rsa.pub
2025-07-06 19:20:26.302855 | orchestrator -> localhost | The key fingerprint is:
2025-07-06 19:20:26.302958 | orchestrator -> localhost | SHA256:VmL8QUL5JIu2mD667zs7x9EJGsVWiPlRpBv6JXZW3Qw zuul-build-sshkey
2025-07-06 19:20:26.302983 | orchestrator -> localhost | The key's randomart image is:
2025-07-06 19:20:26.303019 | orchestrator -> localhost | +---[RSA 3072]----+
2025-07-06 19:20:26.303041 | orchestrator -> localhost | | + **.. E |
2025-07-06 19:20:26.303063 | orchestrator -> localhost | | o *ooo.. + |
2025-07-06 19:20:26.303084 | orchestrator -> localhost | | +oo+=+ . o |
2025-07-06 19:20:26.303104 | orchestrator -> localhost | | ..=+.=.. |
2025-07-06 19:20:26.303123 | orchestrator -> localhost | | .*++S.. |
2025-07-06 19:20:26.303147 | orchestrator -> localhost | | +oo*o |
2025-07-06 19:20:26.303169 | orchestrator -> localhost | | . ... |
2025-07-06 19:20:26.303189 | orchestrator -> localhost | | = o |
2025-07-06 19:20:26.303210 | orchestrator -> localhost | | o==B |
2025-07-06 19:20:26.303231 | orchestrator -> localhost | +----[SHA256]-----+
2025-07-06 19:20:26.303288 | orchestrator -> localhost | ok: Runtime: 0:00:01.270508
2025-07-06 19:20:26.311072 |
2025-07-06 19:20:26.311205 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-07-06 19:20:26.367994 | orchestrator | ok
2025-07-06 19:20:26.378286 | orchestrator | included: /var/lib/zuul/builds/dac217f43f7d42b29ccad2ebb7bfad75/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-07-06 19:20:26.399150 |
2025-07-06 19:20:26.399292 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-07-06 19:20:26.445127 | orchestrator | skipping: Conditional result was False
2025-07-06 19:20:26.463836 |
2025-07-06 19:20:26.463982 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-07-06 19:20:27.454977 | orchestrator | changed
2025-07-06 19:20:27.468051 |
2025-07-06 19:20:27.468196 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-07-06 19:20:27.770130 | orchestrator | ok
2025-07-06 19:20:27.776764 |
2025-07-06 19:20:27.776885 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-07-06 19:20:28.209274 | orchestrator | ok
2025-07-06 19:20:28.215673 |
2025-07-06 19:20:28.215807 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-07-06 19:20:28.649905 | orchestrator | ok
2025-07-06 19:20:28.664413 |
2025-07-06 19:20:28.664522 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-07-06 19:20:28.688114 | orchestrator | skipping: Conditional result was False
2025-07-06 19:20:28.696595 |
2025-07-06 19:20:28.696712 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-07-06 19:20:29.503862 | orchestrator -> localhost | changed
2025-07-06 19:20:29.535201 |
2025-07-06 19:20:29.535888 | TASK [add-build-sshkey : Add back temp key]
2025-07-06 19:20:30.099328 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/dac217f43f7d42b29ccad2ebb7bfad75/work/dac217f43f7d42b29ccad2ebb7bfad75_id_rsa (zuul-build-sshkey)
2025-07-06 19:20:30.099605 | orchestrator -> localhost | ok: Runtime: 0:00:00.021486
2025-07-06 19:20:30.107630 |
2025-07-06 19:20:30.107728 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-07-06 19:20:30.602540 | orchestrator | ok
2025-07-06 19:20:30.610576 |
2025-07-06 19:20:30.610686 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-07-06 19:20:30.663615 | orchestrator | skipping: Conditional result was False
2025-07-06 19:20:30.717459 |
2025-07-06 19:20:30.718007 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-07-06 19:20:31.182473 | orchestrator | ok
2025-07-06 19:20:31.220235 |
2025-07-06 19:20:31.220366 | TASK [validate-host : Define zuul_info_dir fact]
2025-07-06 19:20:31.278258 | orchestrator | ok
2025-07-06 19:20:31.311232 |
2025-07-06 19:20:31.311348 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-07-06 19:20:31.751148 | orchestrator -> localhost | ok
2025-07-06 19:20:31.758118 |
2025-07-06 19:20:31.758207 | TASK [validate-host : Collect information about the host]
2025-07-06 19:20:33.234499 | orchestrator | ok
2025-07-06 19:20:33.292545 |
2025-07-06 19:20:33.292720 | TASK [validate-host : Sanitize hostname]
2025-07-06 19:20:33.475059 | orchestrator | ok
2025-07-06 19:20:33.481189 |
2025-07-06 19:20:33.481325 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-07-06 19:20:35.180412 | orchestrator -> localhost | changed
2025-07-06 19:20:35.198477 |
2025-07-06 19:20:35.198635 | TASK [validate-host : Collect information about zuul worker]
2025-07-06 19:20:35.922639 | orchestrator | ok
2025-07-06 19:20:35.934003 |
2025-07-06 19:20:35.934151 | TASK [validate-host : Write out all zuul information for each host]
2025-07-06 19:20:36.788616 | orchestrator -> localhost | changed
2025-07-06 19:20:36.802307 |
2025-07-06 19:20:36.802455 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-07-06 19:20:37.096711 | orchestrator | ok
2025-07-06 19:20:37.104105 |
2025-07-06 19:20:37.104229 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-07-06 19:21:10.615678 | orchestrator | changed:
2025-07-06 19:21:10.616000 | orchestrator | .d..t...... src/
2025-07-06 19:21:10.616060 | orchestrator | .d..t...... src/github.com/
2025-07-06 19:21:10.616102 | orchestrator | .d..t...... src/github.com/osism/
2025-07-06 19:21:10.616139 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-07-06 19:21:10.616174 | orchestrator | RedHat.yml
2025-07-06 19:21:10.630289 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-07-06 19:21:10.630307 | orchestrator | RedHat.yml
2025-07-06 19:21:10.630359 | orchestrator | = 2.2.0"...
2025-07-06 19:21:26.948884 | orchestrator | 19:21:26.948 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-07-06 19:21:26.983265 | orchestrator | 19:21:26.982 STDOUT terraform: - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2025-07-06 19:21:28.307834 | orchestrator | 19:21:28.307 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.2.0...
2025-07-06 19:21:29.657690 | orchestrator | 19:21:29.657 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.2.0 (signed, key ID 4F80527A391BEFD2)
2025-07-06 19:21:30.816395 | orchestrator | 19:21:30.816 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-07-06 19:21:31.814833 | orchestrator | 19:21:31.814 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-07-06 19:21:32.670875 | orchestrator | 19:21:32.670 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-07-06 19:21:34.026222 | orchestrator | 19:21:34.026 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-07-06 19:21:34.026305 | orchestrator | 19:21:34.026 STDOUT terraform: Providers are signed by their developers.
2025-07-06 19:21:34.026314 | orchestrator | 19:21:34.026 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-07-06 19:21:34.026319 | orchestrator | 19:21:34.026 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-07-06 19:21:34.026346 | orchestrator | 19:21:34.026 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-07-06 19:21:34.026402 | orchestrator | 19:21:34.026 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-07-06 19:21:34.026503 | orchestrator | 19:21:34.026 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-07-06 19:21:34.026517 | orchestrator | 19:21:34.026 STDOUT terraform: you run "tofu init" in the future.
2025-07-06 19:21:34.026664 | orchestrator | 19:21:34.026 STDOUT terraform: OpenTofu has been successfully initialized!
2025-07-06 19:21:34.026715 | orchestrator | 19:21:34.026 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-07-06 19:21:34.026762 | orchestrator | 19:21:34.026 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-07-06 19:21:34.026769 | orchestrator | 19:21:34.026 STDOUT terraform: should now work.
2025-07-06 19:21:34.026821 | orchestrator | 19:21:34.026 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-07-06 19:21:34.026871 | orchestrator | 19:21:34.026 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-07-06 19:21:34.026914 | orchestrator | 19:21:34.026 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-07-06 19:21:34.184605 | orchestrator | 19:21:34.184 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
2025-07-06 19:21:34.184681 | orchestrator | 19:21:34.184 WARN The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead.
2025-07-06 19:21:34.369854 | orchestrator | 19:21:34.369 STDOUT terraform: Created and switched to workspace "ci"!
2025-07-06 19:21:34.369911 | orchestrator | 19:21:34.369 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-07-06 19:21:34.369918 | orchestrator | 19:21:34.369 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-07-06 19:21:34.369923 | orchestrator | 19:21:34.369 STDOUT terraform: for this configuration.
2025-07-06 19:21:34.511631 | orchestrator | 19:21:34.509 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
2025-07-06 19:21:34.511743 | orchestrator | 19:21:34.509 WARN The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead.
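The provider selections recorded in the init output above (terraform-provider-openstack/openstack v3.2.0 resolved from ">= 2.2.0"/">= 1.53.0" constraints, hashicorp/local v2.5.3, hashicorp/null v3.2.4) would typically come from a `required_providers` block along these lines. This is a hypothetical sketch reconstructed from the log, not the testbed repository's actual configuration:

```hcl
terraform {
  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0" # resolved to v3.2.0 in this run
    }
    local = {
      source = "hashicorp/local" # resolved to v2.5.3
    }
    null = {
      source = "hashicorp/null" # resolved to v3.2.4
    }
  }
}
```

Because the job runs `tofu init` in a fresh workspace, the resulting `.terraform.lock.hcl` pins these exact versions for subsequent runs, as the log notes.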
2025-07-06 19:21:34.605293 | orchestrator | 19:21:34.604 STDOUT terraform: ci.auto.tfvars
2025-07-06 19:21:34.613656 | orchestrator | 19:21:34.611 STDOUT terraform: default_custom.tf
2025-07-06 19:21:34.738567 | orchestrator | 19:21:34.738 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed03/terraform` instead.
2025-07-06 19:21:35.690106 | orchestrator | 19:21:35.689 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-07-06 19:21:36.214498 | orchestrator | 19:21:36.214 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-07-06 19:21:36.456147 | orchestrator | 19:21:36.455 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-07-06 19:21:36.456211 | orchestrator | 19:21:36.456 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-07-06 19:21:36.456239 | orchestrator | 19:21:36.456 STDOUT terraform:   + create
2025-07-06 19:21:36.456287 | orchestrator | 19:21:36.456 STDOUT terraform:  <= read (data resources)
2025-07-06 19:21:36.456358 | orchestrator | 19:21:36.456 STDOUT terraform: OpenTofu will perform the following actions:
2025-07-06 19:21:36.456663 | orchestrator | 19:21:36.456 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-07-06 19:21:36.456736 | orchestrator | 19:21:36.456 STDOUT terraform:   # (config refers to values not yet known)
2025-07-06 19:21:36.456809 | orchestrator | 19:21:36.456 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-07-06 19:21:36.456878 | orchestrator | 19:21:36.456 STDOUT terraform:   + checksum = (known after apply)
2025-07-06 19:21:36.456947 | orchestrator | 19:21:36.456 STDOUT terraform:   + created_at = (known after apply)
2025-07-06 19:21:36.457018 | orchestrator | 19:21:36.456 STDOUT terraform:   + file = (known after apply)
2025-07-06 19:21:36.457068 | orchestrator | 19:21:36.457 STDOUT terraform:   + id = (known after apply)
2025-07-06 19:21:36.457124 | orchestrator | 19:21:36.457 STDOUT terraform:   + metadata = (known after apply)
2025-07-06 19:21:36.457188 | orchestrator | 19:21:36.457 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-07-06 19:21:36.457260 | orchestrator | 19:21:36.457 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-07-06 19:21:36.457303 | orchestrator | 19:21:36.457 STDOUT terraform:   + most_recent = true
2025-07-06 19:21:36.457367 | orchestrator | 19:21:36.457 STDOUT terraform:   + name = (known after apply)
2025-07-06 19:21:36.457416 | orchestrator | 19:21:36.457 STDOUT terraform:   + protected = (known after apply)
2025-07-06 19:21:36.457499 | orchestrator | 19:21:36.457 STDOUT terraform:   + region = (known after apply)
2025-07-06 19:21:36.457564 | orchestrator | 19:21:36.457 STDOUT terraform:   + schema = (known after apply)
2025-07-06 19:21:36.457609 | orchestrator | 19:21:36.457 STDOUT terraform:   + size_bytes = (known after apply)
2025-07-06 19:21:36.457667 | orchestrator | 19:21:36.457 STDOUT terraform:   + tags = (known after apply)
2025-07-06 19:21:36.457721 | orchestrator | 19:21:36.457 STDOUT terraform:   + updated_at = (known after apply)
2025-07-06 19:21:36.457741 | orchestrator | 19:21:36.457 STDOUT terraform:   }
2025-07-06 19:21:36.457864 | orchestrator | 19:21:36.457 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-07-06 19:21:36.457923 | orchestrator | 19:21:36.457 STDOUT terraform:   # (config refers to values not yet known)
2025-07-06 19:21:36.457999 | orchestrator | 19:21:36.457 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-07-06 19:21:36.458095 | orchestrator | 19:21:36.457 STDOUT terraform:   + checksum = (known after apply)
2025-07-06 19:21:36.458142 | orchestrator | 19:21:36.458 STDOUT terraform:   + created_at = (known after apply)
2025-07-06 19:21:36.458210 | orchestrator | 19:21:36.458 STDOUT terraform:   + file = (known after apply)
2025-07-06 19:21:36.458300 | orchestrator | 19:21:36.458 STDOUT terraform:   + id = (known after apply)
2025-07-06 19:21:36.458401 | orchestrator | 19:21:36.458 STDOUT terraform:   + metadata = (known after apply)
2025-07-06 19:21:36.458518 | orchestrator | 19:21:36.458 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-07-06 19:21:36.458583 | orchestrator | 19:21:36.458 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-07-06 19:21:36.458612 | orchestrator | 19:21:36.458 STDOUT terraform:   + most_recent = true
2025-07-06 19:21:36.458676 | orchestrator | 19:21:36.458 STDOUT terraform:   + name = (known after apply)
2025-07-06 19:21:36.458722 | orchestrator | 19:21:36.458 STDOUT terraform:   + protected = (known after apply)
2025-07-06 19:21:36.458781 | orchestrator | 19:21:36.458 STDOUT terraform:   + region = (known after apply)
2025-07-06 19:21:36.458848 | orchestrator | 19:21:36.458 STDOUT terraform:   + schema = (known after apply)
2025-07-06 19:21:36.458896 | orchestrator | 19:21:36.458 STDOUT terraform:   + size_bytes = (known after apply)
2025-07-06 19:21:36.458960 | orchestrator | 19:21:36.458 STDOUT terraform:   + tags = (known after apply)
2025-07-06 19:21:36.459008 | orchestrator | 19:21:36.458 STDOUT terraform:   + updated_at = (known after apply)
2025-07-06 19:21:36.459032 | orchestrator | 19:21:36.459 STDOUT terraform:   }
2025-07-06 19:21:36.459081 | orchestrator | 19:21:36.459 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-07-06 19:21:36.459142 | orchestrator | 19:21:36.459 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-07-06 19:21:36.459189 | orchestrator | 19:21:36.459 STDOUT terraform:   + content = (known after apply)
2025-07-06 19:21:36.459248 | orchestrator | 19:21:36.459 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-07-06 19:21:36.459304 | orchestrator | 19:21:36.459 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-07-06 19:21:36.459374 | orchestrator | 19:21:36.459 STDOUT terraform:   + content_md5 = (known after apply)
2025-07-06 19:21:36.459424 | orchestrator | 19:21:36.459 STDOUT terraform:   + content_sha1 = (known after apply)
2025-07-06 19:21:36.459507 | orchestrator | 19:21:36.459 STDOUT terraform:   + content_sha256 = (known after apply)
2025-07-06 19:21:36.459566 | orchestrator | 19:21:36.459 STDOUT terraform:   + content_sha512 = (known after apply)
2025-07-06 19:21:36.459605 | orchestrator | 19:21:36.459 STDOUT terraform:   + directory_permission = "0777"
2025-07-06 19:21:36.459646 | orchestrator | 19:21:36.459 STDOUT terraform:   + file_permission = "0644"
2025-07-06 19:21:36.459715 | orchestrator | 19:21:36.459 STDOUT terraform:   + filename = ".MANAGER_ADDRESS.ci"
2025-07-06 19:21:36.459766 | orchestrator | 19:21:36.459 STDOUT terraform:   + id = (known after apply)
2025-07-06 19:21:36.459786 | orchestrator | 19:21:36.459 STDOUT terraform:   }
2025-07-06 19:21:36.459833 | orchestrator | 19:21:36.459 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-07-06 19:21:36.459884 | orchestrator | 19:21:36.459 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-07-06 19:21:36.459953 | orchestrator | 19:21:36.459 STDOUT terraform:   + content = (known after apply)
2025-07-06 19:21:36.460011 | orchestrator | 19:21:36.459 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-07-06 19:21:36.460068 | orchestrator | 19:21:36.460 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-07-06 19:21:36.460137 | orchestrator | 19:21:36.460 STDOUT terraform:   + content_md5 = (known after apply)
2025-07-06 19:21:36.460187 | orchestrator | 19:21:36.460 STDOUT terraform:   + content_sha1 = (known after apply)
2025-07-06 19:21:36.460250 | orchestrator | 19:21:36.460 STDOUT terraform:   + content_sha256 = (known after apply)
2025-07-06 19:21:36.460316 | orchestrator | 19:21:36.460 STDOUT terraform:   + content_sha512 = (known after apply)
2025-07-06 19:21:36.460341 | orchestrator | 19:21:36.460 STDOUT terraform:   + directory_permission = "0777"
2025-07-06 19:21:36.460381 | orchestrator | 19:21:36.460 STDOUT terraform:   + file_permission = "0644"
2025-07-06 19:21:36.460434 | orchestrator | 19:21:36.460 STDOUT terraform:   + filename = ".id_rsa.ci.pub"
2025-07-06 19:21:36.460514 | orchestrator | 19:21:36.460 STDOUT terraform:   + id = (known after apply)
2025-07-06 19:21:36.460535 | orchestrator | 19:21:36.460 STDOUT terraform:   }
2025-07-06 19:21:36.460584 | orchestrator | 19:21:36.460 STDOUT terraform:   # local_file.inventory will be created
2025-07-06 19:21:36.460617 | orchestrator | 19:21:36.460 STDOUT terraform:   + resource "local_file" "inventory" {
2025-07-06 19:21:36.460683 | orchestrator | 19:21:36.460 STDOUT terraform:   + content = (known after apply)
2025-07-06 19:21:36.460733 | orchestrator | 19:21:36.460 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-07-06 19:21:36.460794 | orchestrator | 19:21:36.460 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-07-06 19:21:36.460852 | orchestrator | 19:21:36.460 STDOUT terraform:   + content_md5 = (known after apply)
2025-07-06 19:21:36.460910 | orchestrator | 19:21:36.460 STDOUT terraform:   + content_sha1 = (known after apply)
2025-07-06 19:21:36.460967 | orchestrator | 19:21:36.460 STDOUT terraform:   + content_sha256 = (known after apply)
2025-07-06 19:21:36.461025 | orchestrator | 19:21:36.460 STDOUT terraform:   + content_sha512 = (known after apply)
2025-07-06 19:21:36.461079 | orchestrator | 19:21:36.461 STDOUT terraform:   + directory_permission = "0777"
2025-07-06 19:21:36.461113 | orchestrator | 19:21:36.461 STDOUT terraform:   + file_permission = "0644"
2025-07-06 19:21:36.461163 | orchestrator | 19:21:36.461 STDOUT terraform:   + filename = "inventory.ci"
2025-07-06 19:21:36.461224 | orchestrator | 19:21:36.461 STDOUT terraform:   + id = (known after apply)
2025-07-06 19:21:36.461245 | orchestrator | 19:21:36.461 STDOUT terraform:   }
2025-07-06 19:21:36.461306 | orchestrator | 19:21:36.461 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-07-06 19:21:36.461345 | orchestrator | 19:21:36.461 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-07-06 19:21:36.461396 | orchestrator | 19:21:36.461 STDOUT terraform:   + content = (sensitive value)
2025-07-06 19:21:36.461491 | orchestrator | 19:21:36.461 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-07-06 19:21:36.461552 | orchestrator | 19:21:36.461 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-07-06 19:21:36.461611 | orchestrator | 19:21:36.461 STDOUT terraform:   + content_md5 = (known after apply)
2025-07-06 19:21:36.461679 | orchestrator | 19:21:36.461 STDOUT terraform:   + content_sha1 = (known after apply)
2025-07-06 19:21:36.461727 | orchestrator | 19:21:36.461 STDOUT terraform:   + content_sha256 = (known after apply)
2025-07-06 19:21:36.461786 | orchestrator | 19:21:36.461 STDOUT terraform:   + content_sha512 = (known after apply)
2025-07-06 19:21:36.461820 | orchestrator | 19:21:36.461 STDOUT terraform:   + directory_permission = "0700"
2025-07-06 19:21:36.461857 | orchestrator | 19:21:36.461 STDOUT terraform:   + file_permission = "0600"
2025-07-06 19:21:36.461903 | orchestrator | 19:21:36.461 STDOUT terraform:   + filename = ".id_rsa.ci"
2025-07-06 19:21:36.461967 | orchestrator | 19:21:36.461 STDOUT terraform:   + id = (known after apply)
2025-07-06 19:21:36.461978 | orchestrator | 19:21:36.461 STDOUT terraform:   }
2025-07-06 19:21:36.462042 | orchestrator | 19:21:36.461 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-07-06 19:21:36.462100 | orchestrator | 19:21:36.462 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-07-06 19:21:36.462134 | orchestrator | 19:21:36.462 STDOUT terraform:   + id = (known after apply)
2025-07-06 19:21:36.462158 | orchestrator | 19:21:36.462 STDOUT terraform:   }
2025-07-06 19:21:36.462235 | orchestrator | 19:21:36.462 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-07-06 19:21:36.462310 | orchestrator | 19:21:36.462 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-07-06 19:21:36.462368 | orchestrator | 19:21:36.462 STDOUT terraform:   + attachment = (known after apply)
2025-07-06 19:21:36.462407 | orchestrator | 19:21:36.462 STDOUT terraform:   + availability_zone = "nova"
2025-07-06 19:21:36.462477 | orchestrator | 19:21:36.462 STDOUT terraform:   + id = (known after apply)
2025-07-06 19:21:36.462556 | orchestrator | 19:21:36.462 STDOUT terraform:   + image_id = (known after apply)
2025-07-06 19:21:36.462611 | orchestrator | 19:21:36.462 STDOUT terraform:   + metadata = (known after apply)
2025-07-06 19:21:36.462681 | orchestrator | 19:21:36.462 STDOUT terraform:   + name = "testbed-volume-manager-base"
2025-07-06 19:21:36.462738 | orchestrator | 19:21:36.462 STDOUT terraform:   + region = (known after apply)
2025-07-06 19:21:36.462772 | orchestrator | 19:21:36.462 STDOUT terraform:   + size = 80
2025-07-06 19:21:36.462810 | orchestrator | 19:21:36.462 STDOUT terraform:   + volume_retype_policy = "never"
2025-07-06 19:21:36.462846 | orchestrator | 19:21:36.462 STDOUT terraform:   + volume_type = "ssd"
2025-07-06 19:21:36.462867 | orchestrator | 19:21:36.462 STDOUT terraform:   }
2025-07-06 19:21:36.462941 | orchestrator | 19:21:36.462 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-07-06 19:21:36.463011 | orchestrator | 19:21:36.462 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-06 19:21:36.463069 | orchestrator | 19:21:36.463 STDOUT terraform:   + attachment = (known after apply)
2025-07-06 19:21:36.463104 | orchestrator | 19:21:36.463 STDOUT terraform:   + availability_zone = "nova"
2025-07-06 19:21:36.463158 | orchestrator | 19:21:36.463 STDOUT terraform:   + id = (known after apply)
2025-07-06 19:21:36.463212 | orchestrator | 19:21:36.463 STDOUT terraform:   + image_id = (known after apply)
2025-07-06 19:21:36.463268 | orchestrator | 19:21:36.463 STDOUT terraform:   + metadata = (known after apply)
2025-07-06 19:21:36.463336 | orchestrator | 19:21:36.463 STDOUT terraform:   + name = "testbed-volume-0-node-base"
2025-07-06 19:21:36.463390 | orchestrator | 19:21:36.463 STDOUT terraform:   + region = (known after apply)
2025-07-06 19:21:36.463422 | orchestrator | 19:21:36.463 STDOUT terraform:   + size = 80
2025-07-06 19:21:36.463491 | orchestrator | 19:21:36.463 STDOUT terraform:   + volume_retype_policy = "never"
2025-07-06 19:21:36.463535 | orchestrator | 19:21:36.463 STDOUT terraform:   + volume_type = "ssd"
2025-07-06 19:21:36.463558 | orchestrator | 19:21:36.463 STDOUT terraform:   }
2025-07-06 19:21:36.463697 | orchestrator | 19:21:36.463 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-07-06 19:21:36.463770 | orchestrator | 19:21:36.463 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-06 19:21:36.463891 | orchestrator | 19:21:36.463 STDOUT terraform:   + attachment = (known after apply)
2025-07-06 19:21:36.463930 | orchestrator | 19:21:36.463 STDOUT terraform:   + availability_zone = "nova"
2025-07-06 19:21:36.463993 | orchestrator | 19:21:36.463 STDOUT terraform:   + id = (known after apply)
2025-07-06 19:21:36.464048 | orchestrator | 19:21:36.463 STDOUT terraform:   + image_id = (known after apply)
2025-07-06 19:21:36.464104 | orchestrator | 19:21:36.464 STDOUT terraform:   + metadata = (known after apply)
2025-07-06 19:21:36.464182 | orchestrator | 19:21:36.464 STDOUT terraform:   + name = "testbed-volume-1-node-base"
2025-07-06 19:21:36.464244 | orchestrator | 19:21:36.464 STDOUT terraform:   + region = (known after apply)
2025-07-06 19:21:36.464307 | orchestrator | 19:21:36.464 STDOUT terraform:   + size = 80
2025-07-06 19:21:36.464344 | orchestrator | 19:21:36.464 STDOUT terraform:   + volume_retype_policy = "never"
2025-07-06 19:21:36.464382 | orchestrator | 19:21:36.464 STDOUT terraform:   + volume_type = "ssd"
2025-07-06 19:21:36.464403 | orchestrator | 19:21:36.464 STDOUT terraform:   }
2025-07-06 19:21:36.464495 | orchestrator | 19:21:36.464 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-07-06 19:21:36.464565 | orchestrator | 19:21:36.464 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-06 19:21:36.464620 | orchestrator | 19:21:36.464 STDOUT terraform:   + attachment = (known after apply)
2025-07-06 19:21:36.464657 | orchestrator | 19:21:36.464 STDOUT terraform:   + availability_zone = "nova"
2025-07-06 19:21:36.464712 | orchestrator | 19:21:36.464 STDOUT terraform:   + id = (known after apply)
2025-07-06 19:21:36.464770 | orchestrator | 19:21:36.464 STDOUT terraform:   + image_id = (known after apply)
2025-07-06 19:21:36.464825 | orchestrator | 19:21:36.464 STDOUT terraform:   + metadata = (known after apply)
2025-07-06 19:21:36.464893 | orchestrator | 19:21:36.464 STDOUT terraform:   + name = "testbed-volume-2-node-base"
2025-07-06 19:21:36.464948 | orchestrator | 19:21:36.464 STDOUT terraform:   + region = (known after apply)
2025-07-06 19:21:36.464990 | orchestrator | 19:21:36.464 STDOUT terraform:   + size = 80
2025-07-06 19:21:36.465029 | orchestrator | 19:21:36.464 STDOUT terraform:   + volume_retype_policy = "never"
2025-07-06 19:21:36.465073 | orchestrator | 19:21:36.465 STDOUT terraform:   + volume_type = "ssd"
2025-07-06 19:21:36.465104 | orchestrator | 19:21:36.465 STDOUT terraform:   }
2025-07-06 19:21:36.465211 | orchestrator | 19:21:36.465 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-07-06 19:21:36.465282 | orchestrator | 19:21:36.465 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-06 19:21:36.465340 | orchestrator | 19:21:36.465 STDOUT terraform:   + attachment = (known after apply)
2025-07-06 19:21:36.465375 | orchestrator | 19:21:36.465 STDOUT terraform:   + availability_zone = "nova"
2025-07-06 19:21:36.465430 | orchestrator | 19:21:36.465 STDOUT terraform:   + id = (known after apply)
2025-07-06 19:21:36.465519 | orchestrator | 19:21:36.465 STDOUT terraform:   + image_id = (known after apply)
2025-07-06 19:21:36.465576 | orchestrator | 19:21:36.465 STDOUT terraform:   + metadata = (known after apply)
2025-07-06 19:21:36.465643 | orchestrator | 19:21:36.465 STDOUT terraform:   + name = "testbed-volume-3-node-base"
2025-07-06 19:21:36.465698 | orchestrator | 19:21:36.465 STDOUT terraform:   + region = (known after apply)
2025-07-06 19:21:36.465735 | orchestrator | 19:21:36.465 STDOUT terraform:   + size = 80
2025-07-06 19:21:36.465770 | orchestrator | 19:21:36.465 STDOUT terraform:   + volume_retype_policy = "never"
2025-07-06 19:21:36.465810 | orchestrator | 19:21:36.465 STDOUT terraform:   + volume_type = "ssd"
2025-07-06 19:21:36.465834 | orchestrator | 19:21:36.465 STDOUT terraform:   }
2025-07-06 19:21:36.465902 | orchestrator | 19:21:36.465 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-07-06 19:21:36.465972 | orchestrator | 19:21:36.465 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-06 19:21:36.466067 | orchestrator | 19:21:36.465 STDOUT terraform:   + attachment = (known after apply)
2025-07-06 19:21:36.466106 | orchestrator | 19:21:36.466 STDOUT terraform:   + availability_zone = "nova"
2025-07-06 19:21:36.466163 | orchestrator | 19:21:36.466 STDOUT terraform:   + id = (known after apply)
2025-07-06 19:21:36.466217 | orchestrator | 19:21:36.466 STDOUT terraform:   + image_id = (known after apply)
2025-07-06 19:21:36.466274 | orchestrator | 19:21:36.466 STDOUT
terraform:  + metadata = (known after apply) 2025-07-06 19:21:36.466352 | orchestrator | 19:21:36.466 STDOUT terraform:  + name = "testbed-volume-4-node-base" 2025-07-06 19:21:36.466396 | orchestrator | 19:21:36.466 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:36.466428 | orchestrator | 19:21:36.466 STDOUT terraform:  + size = 80 2025-07-06 19:21:36.466510 | orchestrator | 19:21:36.466 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-06 19:21:36.466547 | orchestrator | 19:21:36.466 STDOUT terraform:  + volume_type = "ssd" 2025-07-06 19:21:36.466569 | orchestrator | 19:21:36.466 STDOUT terraform:  } 2025-07-06 19:21:36.466639 | orchestrator | 19:21:36.466 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created 2025-07-06 19:21:36.466720 | orchestrator | 19:21:36.466 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-07-06 19:21:36.466791 | orchestrator | 19:21:36.466 STDOUT terraform:  + attachment = (known after apply) 2025-07-06 19:21:36.466832 | orchestrator | 19:21:36.466 STDOUT terraform:  + availability_zone = "nova" 2025-07-06 19:21:36.466895 | orchestrator | 19:21:36.466 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:36.466943 | orchestrator | 19:21:36.466 STDOUT terraform:  + image_id = (known after apply) 2025-07-06 19:21:36.466996 | orchestrator | 19:21:36.466 STDOUT terraform:  + metadata = (known after apply) 2025-07-06 19:21:36.467059 | orchestrator | 19:21:36.466 STDOUT terraform:  + name = "testbed-volume-5-node-base" 2025-07-06 19:21:36.467107 | orchestrator | 19:21:36.467 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:36.467135 | orchestrator | 19:21:36.467 STDOUT terraform:  + size = 80 2025-07-06 19:21:36.467167 | orchestrator | 19:21:36.467 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-06 19:21:36.467206 | orchestrator | 19:21:36.467 STDOUT terraform:  + volume_type = "ssd" 2025-07-06 
19:21:36.467225 | orchestrator | 19:21:36.467 STDOUT terraform:  } 2025-07-06 19:21:36.467286 | orchestrator | 19:21:36.467 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created 2025-07-06 19:21:36.467343 | orchestrator | 19:21:36.467 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-06 19:21:36.467389 | orchestrator | 19:21:36.467 STDOUT terraform:  + attachment = (known after apply) 2025-07-06 19:21:36.467424 | orchestrator | 19:21:36.467 STDOUT terraform:  + availability_zone = "nova" 2025-07-06 19:21:36.467498 | orchestrator | 19:21:36.467 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:36.467545 | orchestrator | 19:21:36.467 STDOUT terraform:  + metadata = (known after apply) 2025-07-06 19:21:36.467596 | orchestrator | 19:21:36.467 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-07-06 19:21:36.467647 | orchestrator | 19:21:36.467 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:36.467675 | orchestrator | 19:21:36.467 STDOUT terraform:  + size = 20 2025-07-06 19:21:36.467707 | orchestrator | 19:21:36.467 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-06 19:21:36.467739 | orchestrator | 19:21:36.467 STDOUT terraform:  + volume_type = "ssd" 2025-07-06 19:21:36.467757 | orchestrator | 19:21:36.467 STDOUT terraform:  } 2025-07-06 19:21:36.467817 | orchestrator | 19:21:36.467 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-07-06 19:21:36.467878 | orchestrator | 19:21:36.467 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-06 19:21:36.467923 | orchestrator | 19:21:36.467 STDOUT terraform:  + attachment = (known after apply) 2025-07-06 19:21:36.467956 | orchestrator | 19:21:36.467 STDOUT terraform:  + availability_zone = "nova" 2025-07-06 19:21:36.468004 | orchestrator | 19:21:36.467 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:36.468051 | 
orchestrator | 19:21:36.468 STDOUT terraform:  + metadata = (known after apply) 2025-07-06 19:21:36.468103 | orchestrator | 19:21:36.468 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-07-06 19:21:36.468152 | orchestrator | 19:21:36.468 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:36.468181 | orchestrator | 19:21:36.468 STDOUT terraform:  + size = 20 2025-07-06 19:21:36.468218 | orchestrator | 19:21:36.468 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-06 19:21:36.468248 | orchestrator | 19:21:36.468 STDOUT terraform:  + volume_type = "ssd" 2025-07-06 19:21:36.468266 | orchestrator | 19:21:36.468 STDOUT terraform:  } 2025-07-06 19:21:36.468325 | orchestrator | 19:21:36.468 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-07-06 19:21:36.468383 | orchestrator | 19:21:36.468 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-06 19:21:36.468433 | orchestrator | 19:21:36.468 STDOUT terraform:  + attachment = (known after apply) 2025-07-06 19:21:36.468502 | orchestrator | 19:21:36.468 STDOUT terraform:  + availability_zone = "nova" 2025-07-06 19:21:36.468525 | orchestrator | 19:21:36.468 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:36.468572 | orchestrator | 19:21:36.468 STDOUT terraform:  + metadata = (known after apply) 2025-07-06 19:21:36.468624 | orchestrator | 19:21:36.468 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-07-06 19:21:36.468673 | orchestrator | 19:21:36.468 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:36.468703 | orchestrator | 19:21:36.468 STDOUT terraform:  + size = 20 2025-07-06 19:21:36.468735 | orchestrator | 19:21:36.468 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-06 19:21:36.468767 | orchestrator | 19:21:36.468 STDOUT terraform:  + volume_type = "ssd" 2025-07-06 19:21:36.468784 | orchestrator | 19:21:36.468 STDOUT terraform:  } 2025-07-06 19:21:36.468843 | 
orchestrator | 19:21:36.468 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-07-06 19:21:36.468903 | orchestrator | 19:21:36.468 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-06 19:21:36.468951 | orchestrator | 19:21:36.468 STDOUT terraform:  + attachment = (known after apply) 2025-07-06 19:21:36.468982 | orchestrator | 19:21:36.468 STDOUT terraform:  + availability_zone = "nova" 2025-07-06 19:21:36.469031 | orchestrator | 19:21:36.468 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:36.469079 | orchestrator | 19:21:36.469 STDOUT terraform:  + metadata = (known after apply) 2025-07-06 19:21:36.469131 | orchestrator | 19:21:36.469 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-07-06 19:21:36.469180 | orchestrator | 19:21:36.469 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:36.469209 | orchestrator | 19:21:36.469 STDOUT terraform:  + size = 20 2025-07-06 19:21:36.469242 | orchestrator | 19:21:36.469 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-06 19:21:36.469275 | orchestrator | 19:21:36.469 STDOUT terraform:  + volume_type = "ssd" 2025-07-06 19:21:36.469292 | orchestrator | 19:21:36.469 STDOUT terraform:  } 2025-07-06 19:21:36.469353 | orchestrator | 19:21:36.469 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-07-06 19:21:36.469411 | orchestrator | 19:21:36.469 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-06 19:21:36.469471 | orchestrator | 19:21:36.469 STDOUT terraform:  + attachment = (known after apply) 2025-07-06 19:21:36.469503 | orchestrator | 19:21:36.469 STDOUT terraform:  + availability_zone = "nova" 2025-07-06 19:21:36.469551 | orchestrator | 19:21:36.469 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:36.469599 | orchestrator | 19:21:36.469 STDOUT terraform:  + metadata = (known after apply) 2025-07-06 
19:21:36.469657 | orchestrator | 19:21:36.469 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-07-06 19:21:36.469701 | orchestrator | 19:21:36.469 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:36.469728 | orchestrator | 19:21:36.469 STDOUT terraform:  + size = 20 2025-07-06 19:21:36.469762 | orchestrator | 19:21:36.469 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-06 19:21:36.469793 | orchestrator | 19:21:36.469 STDOUT terraform:  + volume_type = "ssd" 2025-07-06 19:21:36.469812 | orchestrator | 19:21:36.469 STDOUT terraform:  } 2025-07-06 19:21:36.469891 | orchestrator | 19:21:36.469 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-07-06 19:21:36.469926 | orchestrator | 19:21:36.469 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-06 19:21:36.469977 | orchestrator | 19:21:36.469 STDOUT terraform:  + attachment = (known after apply) 2025-07-06 19:21:36.470009 | orchestrator | 19:21:36.469 STDOUT terraform:  + availability_zone = "nova" 2025-07-06 19:21:36.470089 | orchestrator | 19:21:36.470 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:36.470140 | orchestrator | 19:21:36.470 STDOUT terraform:  + metadata = (known after apply) 2025-07-06 19:21:36.470196 | orchestrator | 19:21:36.470 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-07-06 19:21:36.470241 | orchestrator | 19:21:36.470 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:36.470295 | orchestrator | 19:21:36.470 STDOUT terraform:  + size = 20 2025-07-06 19:21:36.470517 | orchestrator | 19:21:36.470 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-06 19:21:36.474120 | orchestrator | 19:21:36.470 STDOUT terraform:  + volume_type = "ssd" 2025-07-06 19:21:36.474182 | orchestrator | 19:21:36.470 STDOUT terraform:  } 2025-07-06 19:21:36.474189 | orchestrator | 19:21:36.470 STDOUT terraform:  # 
openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-07-06 19:21:36.474195 | orchestrator | 19:21:36.470 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-06 19:21:36.474199 | orchestrator | 19:21:36.470 STDOUT terraform:  + attachment = (known after apply) 2025-07-06 19:21:36.474203 | orchestrator | 19:21:36.470 STDOUT terraform:  + availability_zone = "nova" 2025-07-06 19:21:36.474207 | orchestrator | 19:21:36.470 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:36.474211 | orchestrator | 19:21:36.470 STDOUT terraform:  + metadata = (known after apply) 2025-07-06 19:21:36.474215 | orchestrator | 19:21:36.470 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-07-06 19:21:36.474219 | orchestrator | 19:21:36.470 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:36.474222 | orchestrator | 19:21:36.470 STDOUT terraform:  + size = 20 2025-07-06 19:21:36.474226 | orchestrator | 19:21:36.470 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-06 19:21:36.474230 | orchestrator | 19:21:36.470 STDOUT terraform:  + volume_type = "ssd" 2025-07-06 19:21:36.474234 | orchestrator | 19:21:36.470 STDOUT terraform:  } 2025-07-06 19:21:36.474238 | orchestrator | 19:21:36.470 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-07-06 19:21:36.474241 | orchestrator | 19:21:36.471 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-06 19:21:36.474257 | orchestrator | 19:21:36.471 STDOUT terraform:  + attachment = (known after apply) 2025-07-06 19:21:36.474261 | orchestrator | 19:21:36.471 STDOUT terraform:  + availability_zone = "nova" 2025-07-06 19:21:36.474265 | orchestrator | 19:21:36.471 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:36.474268 | orchestrator | 19:21:36.471 STDOUT terraform:  + metadata = (known after apply) 2025-07-06 19:21:36.474272 | orchestrator | 19:21:36.471 STDOUT 
terraform:  + name = "testbed-volume-7-node-4" 2025-07-06 19:21:36.474276 | orchestrator | 19:21:36.471 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:36.474285 | orchestrator | 19:21:36.471 STDOUT terraform:  + size = 20 2025-07-06 19:21:36.474289 | orchestrator | 19:21:36.471 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-06 19:21:36.474293 | orchestrator | 19:21:36.471 STDOUT terraform:  + volume_type = "ssd" 2025-07-06 19:21:36.474298 | orchestrator | 19:21:36.471 STDOUT terraform:  } 2025-07-06 19:21:36.474304 | orchestrator | 19:21:36.471 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-07-06 19:21:36.474311 | orchestrator | 19:21:36.471 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-06 19:21:36.474318 | orchestrator | 19:21:36.471 STDOUT terraform:  + attachment = (known after apply) 2025-07-06 19:21:36.474324 | orchestrator | 19:21:36.471 STDOUT terraform:  + availability_zone = "nova" 2025-07-06 19:21:36.474330 | orchestrator | 19:21:36.471 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:36.474336 | orchestrator | 19:21:36.471 STDOUT terraform:  + metadata = (known after apply) 2025-07-06 19:21:36.474342 | orchestrator | 19:21:36.471 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-07-06 19:21:36.474348 | orchestrator | 19:21:36.471 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:36.474355 | orchestrator | 19:21:36.471 STDOUT terraform:  + size = 20 2025-07-06 19:21:36.474359 | orchestrator | 19:21:36.471 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-06 19:21:36.474363 | orchestrator | 19:21:36.471 STDOUT terraform:  + volume_type = "ssd" 2025-07-06 19:21:36.474378 | orchestrator | 19:21:36.471 STDOUT terraform:  } 2025-07-06 19:21:36.474382 | orchestrator | 19:21:36.471 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-07-06 19:21:36.474386 | 
orchestrator | 19:21:36.471 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-07-06 19:21:36.474390 | orchestrator | 19:21:36.471 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-07-06 19:21:36.474394 | orchestrator | 19:21:36.471 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-06 19:21:36.474397 | orchestrator | 19:21:36.471 STDOUT terraform:  + all_metadata = (known after apply) 2025-07-06 19:21:36.474401 | orchestrator | 19:21:36.471 STDOUT terraform:  + all_tags = (known after apply) 2025-07-06 19:21:36.474405 | orchestrator | 19:21:36.472 STDOUT terraform:  + availability_zone = "nova" 2025-07-06 19:21:36.474412 | orchestrator | 19:21:36.472 STDOUT terraform:  + config_drive = true 2025-07-06 19:21:36.474416 | orchestrator | 19:21:36.472 STDOUT terraform:  + created = (known after apply) 2025-07-06 19:21:36.474420 | orchestrator | 19:21:36.472 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-06 19:21:36.474424 | orchestrator | 19:21:36.472 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-07-06 19:21:36.474427 | orchestrator | 19:21:36.472 STDOUT terraform:  + force_delete = false 2025-07-06 19:21:36.474431 | orchestrator | 19:21:36.472 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-07-06 19:21:36.474435 | orchestrator | 19:21:36.472 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:36.474439 | orchestrator | 19:21:36.472 STDOUT terraform:  + image_id = (known after apply) 2025-07-06 19:21:36.474442 | orchestrator | 19:21:36.472 STDOUT terraform:  + image_name = (known after apply) 2025-07-06 19:21:36.474446 | orchestrator | 19:21:36.472 STDOUT terraform:  + key_pair = "testbed" 2025-07-06 19:21:36.474467 | orchestrator | 19:21:36.472 STDOUT terraform:  + name = "testbed-manager" 2025-07-06 19:21:36.474472 | orchestrator | 19:21:36.472 STDOUT terraform:  + power_state = "active" 2025-07-06 19:21:36.474476 | orchestrator | 19:21:36.472 STDOUT 
terraform:  + region = (known after apply) 2025-07-06 19:21:36.474479 | orchestrator | 19:21:36.472 STDOUT terraform:  + security_groups = (known after apply) 2025-07-06 19:21:36.474483 | orchestrator | 19:21:36.472 STDOUT terraform:  + stop_before_destroy = false 2025-07-06 19:21:36.474487 | orchestrator | 19:21:36.472 STDOUT terraform:  + updated = (known after apply) 2025-07-06 19:21:36.474491 | orchestrator | 19:21:36.472 STDOUT terraform:  + user_data = (sensitive value) 2025-07-06 19:21:36.474495 | orchestrator | 19:21:36.472 STDOUT terraform:  + block_device { 2025-07-06 19:21:36.474498 | orchestrator | 19:21:36.472 STDOUT terraform:  + boot_index = 0 2025-07-06 19:21:36.474502 | orchestrator | 19:21:36.472 STDOUT terraform:  + delete_on_termination = false 2025-07-06 19:21:36.474509 | orchestrator | 19:21:36.472 STDOUT terraform:  + destination_type = "volume" 2025-07-06 19:21:36.474513 | orchestrator | 19:21:36.472 STDOUT terraform:  + multiattach = false 2025-07-06 19:21:36.474516 | orchestrator | 19:21:36.472 STDOUT terraform:  + source_type = "volume" 2025-07-06 19:21:36.474520 | orchestrator | 19:21:36.472 STDOUT terraform:  + uuid = (known after apply) 2025-07-06 19:21:36.474524 | orchestrator | 19:21:36.472 STDOUT terraform:  } 2025-07-06 19:21:36.474528 | orchestrator | 19:21:36.472 STDOUT terraform:  + network { 2025-07-06 19:21:36.474532 | orchestrator | 19:21:36.472 STDOUT terraform:  + access_network = false 2025-07-06 19:21:36.474535 | orchestrator | 19:21:36.472 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-07-06 19:21:36.474541 | orchestrator | 19:21:36.472 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-07-06 19:21:36.474545 | orchestrator | 19:21:36.472 STDOUT terraform:  + mac = (known after apply) 2025-07-06 19:21:36.474558 | orchestrator | 19:21:36.472 STDOUT terraform:  + name = (known after apply) 2025-07-06 19:21:36.474562 | orchestrator | 19:21:36.472 STDOUT terraform:  + port = (known after apply) 
2025-07-06 19:21:36.474566 | orchestrator | 19:21:36.472 STDOUT terraform:  + uuid = (known after apply) 2025-07-06 19:21:36.474570 | orchestrator | 19:21:36.473 STDOUT terraform:  } 2025-07-06 19:21:36.474574 | orchestrator | 19:21:36.473 STDOUT terraform:  } 2025-07-06 19:21:36.474578 | orchestrator | 19:21:36.473 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-07-06 19:21:36.474583 | orchestrator | 19:21:36.473 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-07-06 19:21:36.474587 | orchestrator | 19:21:36.473 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-07-06 19:21:36.474591 | orchestrator | 19:21:36.473 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-06 19:21:36.474595 | orchestrator | 19:21:36.473 STDOUT terraform:  + all_metadata = (known after apply) 2025-07-06 19:21:36.474599 | orchestrator | 19:21:36.473 STDOUT terraform:  + all_tags = (known after apply) 2025-07-06 19:21:36.474603 | orchestrator | 19:21:36.473 STDOUT terraform:  + availability_zone = "nova" 2025-07-06 19:21:36.474607 | orchestrator | 19:21:36.473 STDOUT terraform:  + config_drive = true 2025-07-06 19:21:36.474612 | orchestrator | 19:21:36.473 STDOUT terraform:  + created = (known after apply) 2025-07-06 19:21:36.474616 | orchestrator | 19:21:36.473 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-06 19:21:36.474620 | orchestrator | 19:21:36.473 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-07-06 19:21:36.474624 | orchestrator | 19:21:36.473 STDOUT terraform:  + force_delete = false 2025-07-06 19:21:36.474628 | orchestrator | 19:21:36.473 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-07-06 19:21:36.474632 | orchestrator | 19:21:36.473 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:36.474636 | orchestrator | 19:21:36.473 STDOUT terraform:  + image_id = (known after apply) 2025-07-06 19:21:36.474641 | orchestrator | 19:21:36.473 
STDOUT terraform:  + image_name = (known after apply) 2025-07-06 19:21:36.474645 | orchestrator | 19:21:36.473 STDOUT terraform:  + key_pair = "testbed" 2025-07-06 19:21:36.474649 | orchestrator | 19:21:36.473 STDOUT terraform:  + name = "testbed-node-0" 2025-07-06 19:21:36.474653 | orchestrator | 19:21:36.473 STDOUT terraform:  + power_state = "active" 2025-07-06 19:21:36.474657 | orchestrator | 19:21:36.473 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:36.474661 | orchestrator | 19:21:36.473 STDOUT terraform:  + security_groups = (known after apply) 2025-07-06 19:21:36.474665 | orchestrator | 19:21:36.473 STDOUT terraform:  + stop_before_destroy = false 2025-07-06 19:21:36.474669 | orchestrator | 19:21:36.473 STDOUT terraform:  + updated = (known after apply) 2025-07-06 19:21:36.474674 | orchestrator | 19:21:36.473 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-07-06 19:21:36.474678 | orchestrator | 19:21:36.473 STDOUT terraform:  + block_device { 2025-07-06 19:21:36.474685 | orchestrator | 19:21:36.473 STDOUT terraform:  + boot_index = 0 2025-07-06 19:21:36.474689 | orchestrator | 19:21:36.473 STDOUT terraform:  + delete_on_termination = false 2025-07-06 19:21:36.474694 | orchestrator | 19:21:36.473 STDOUT terraform:  + destination_type = "volume" 2025-07-06 19:21:36.474698 | orchestrator | 19:21:36.473 STDOUT terraform:  + multiattach = false 2025-07-06 19:21:36.474702 | orchestrator | 19:21:36.474 STDOUT terraform:  + source_type = "volume" 2025-07-06 19:21:36.474706 | orchestrator | 19:21:36.474 STDOUT terraform:  + uuid = (known after apply) 2025-07-06 19:21:36.474710 | orchestrator | 19:21:36.474 STDOUT terraform:  } 2025-07-06 19:21:36.474714 | orchestrator | 19:21:36.474 STDOUT terraform:  + network { 2025-07-06 19:21:36.474722 | orchestrator | 19:21:36.474 STDOUT terraform:  + access_network = false 2025-07-06 19:21:36.474729 | orchestrator | 19:21:36.474 STDOUT terraform:  + fixed_ip_v4 = (known 
after apply) 2025-07-06 19:21:36.474733 | orchestrator | 19:21:36.474 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-07-06 19:21:36.474738 | orchestrator | 19:21:36.474 STDOUT terraform:  + mac = (known after apply) 2025-07-06 19:21:36.474742 | orchestrator | 19:21:36.474 STDOUT terraform:  + name = (known after apply) 2025-07-06 19:21:36.474746 | orchestrator | 19:21:36.474 STDOUT terraform:  + port = (known after apply) 2025-07-06 19:21:36.474750 | orchestrator | 19:21:36.474 STDOUT terraform:  + uuid = (known after apply) 2025-07-06 19:21:36.474754 | orchestrator | 19:21:36.474 STDOUT terraform:  } 2025-07-06 19:21:36.474759 | orchestrator | 19:21:36.474 STDOUT terraform:  } 2025-07-06 19:21:36.474763 | orchestrator | 19:21:36.474 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-07-06 19:21:36.474767 | orchestrator | 19:21:36.474 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-07-06 19:21:36.474771 | orchestrator | 19:21:36.474 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-07-06 19:21:36.474775 | orchestrator | 19:21:36.474 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-06 19:21:36.474782 | orchestrator | 19:21:36.474 STDOUT terraform:  + all_metadata = (known after apply) 2025-07-06 19:21:36.474786 | orchestrator | 19:21:36.474 STDOUT terraform:  + all_tags = (known after apply) 2025-07-06 19:21:36.478093 | orchestrator | 19:21:36.474 STDOUT terraform:  + availability_zone = "nova" 2025-07-06 19:21:36.478137 | orchestrator | 19:21:36.474 STDOUT terraform:  + config_drive = true 2025-07-06 19:21:36.478143 | orchestrator | 19:21:36.474 STDOUT terraform:  + created = (known after apply) 2025-07-06 19:21:36.478148 | orchestrator | 19:21:36.474 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-06 19:21:36.478153 | orchestrator | 19:21:36.474 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-07-06 19:21:36.478157 | orchestrator | 
19:21:36.474 STDOUT terraform:  + force_delete = false 2025-07-06 19:21:36.478162 | orchestrator | 19:21:36.474 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-07-06 19:21:36.478177 | orchestrator | 19:21:36.474 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:36.478182 | orchestrator | 19:21:36.474 STDOUT terraform:  + image_id = (known after apply) 2025-07-06 19:21:36.478187 | orchestrator | 19:21:36.475 STDOUT terraform:  + image_name = (known after apply) 2025-07-06 19:21:36.478191 | orchestrator | 19:21:36.475 STDOUT terraform:  + key_pair = "testbed" 2025-07-06 19:21:36.478196 | orchestrator | 19:21:36.475 STDOUT terraform:  + name = "testbed-node-1" 2025-07-06 19:21:36.478200 | orchestrator | 19:21:36.475 STDOUT terraform:  + power_state = "active" 2025-07-06 19:21:36.478205 | orchestrator | 19:21:36.475 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:36.478209 | orchestrator | 19:21:36.475 STDOUT terraform:  + security_groups = (known after apply) 2025-07-06 19:21:36.478214 | orchestrator | 19:21:36.475 STDOUT terraform:  + stop_before_destroy = false 2025-07-06 19:21:36.478218 | orchestrator | 19:21:36.475 STDOUT terraform:  + updated = (known after apply) 2025-07-06 19:21:36.478223 | orchestrator | 19:21:36.475 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-07-06 19:21:36.478228 | orchestrator | 19:21:36.475 STDOUT terraform:  + block_device { 2025-07-06 19:21:36.478232 | orchestrator | 19:21:36.475 STDOUT terraform:  + boot_index = 0 2025-07-06 19:21:36.478237 | orchestrator | 19:21:36.475 STDOUT terraform:  + delete_on_termination = false 2025-07-06 19:21:36.478241 | orchestrator | 19:21:36.475 STDOUT terraform:  + destination_type = "volume" 2025-07-06 19:21:36.478246 | orchestrator | 19:21:36.475 STDOUT terraform:  + multiattach = false 2025-07-06 19:21:36.478251 | orchestrator | 19:21:36.475 STDOUT terraform:  + source_type = "volume" 2025-07-06 19:21:36.478255 | 
orchestrator | 19:21:36.475 STDOUT terraform:  + uuid = (known after apply) 2025-07-06 19:21:36.478260 | orchestrator | 19:21:36.475 STDOUT terraform:  } 2025-07-06 19:21:36.478265 | orchestrator | 19:21:36.475 STDOUT terraform:  + network { 2025-07-06 19:21:36.478269 | orchestrator | 19:21:36.475 STDOUT terraform:  + access_network = false 2025-07-06 19:21:36.478274 | orchestrator | 19:21:36.475 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-07-06 19:21:36.478278 | orchestrator | 19:21:36.475 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-07-06 19:21:36.478283 | orchestrator | 19:21:36.475 STDOUT terraform:  + mac = (known after apply) 2025-07-06 19:21:36.478287 | orchestrator | 19:21:36.475 STDOUT terraform:  + name = (known after apply) 2025-07-06 19:21:36.478292 | orchestrator | 19:21:36.475 STDOUT terraform:  + port = (known after apply) 2025-07-06 19:21:36.478296 | orchestrator | 19:21:36.475 STDOUT terraform:  + uuid = (known after apply) 2025-07-06 19:21:36.478301 | orchestrator | 19:21:36.475 STDOUT terraform:  } 2025-07-06 19:21:36.478305 | orchestrator | 19:21:36.475 STDOUT terraform:  } 2025-07-06 19:21:36.478310 | orchestrator | 19:21:36.475 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-07-06 19:21:36.478314 | orchestrator | 19:21:36.475 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-07-06 19:21:36.478332 | orchestrator | 19:21:36.475 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-07-06 19:21:36.478337 | orchestrator | 19:21:36.475 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-06 19:21:36.478341 | orchestrator | 19:21:36.475 STDOUT terraform:  + all_metadata = (known after apply) 2025-07-06 19:21:36.478352 | orchestrator | 19:21:36.475 STDOUT terraform:  + all_tags = (known after apply) 2025-07-06 19:21:36.478357 | orchestrator | 19:21:36.475 STDOUT terraform:  + availability_zone = "nova" 2025-07-06 19:21:36.478361 | 
orchestrator | 19:21:36.475 STDOUT terraform:  + config_drive = true 2025-07-06 19:21:36.478366 | orchestrator | 19:21:36.475 STDOUT terraform:  + created = (known after apply) 2025-07-06 19:21:36.478370 | orchestrator | 19:21:36.475 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-06 19:21:36.478375 | orchestrator | 19:21:36.475 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-07-06 19:21:36.478380 | orchestrator | 19:21:36.475 STDOUT terraform:  + force_delete = false 2025-07-06 19:21:36.478384 | orchestrator | 19:21:36.475 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-07-06 19:21:36.478389 | orchestrator | 19:21:36.475 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:36.478393 | orchestrator | 19:21:36.476 STDOUT terraform:  + image_id = (known after apply) 2025-07-06 19:21:36.478398 | orchestrator | 19:21:36.476 STDOUT terraform:  + image_name = (known after apply) 2025-07-06 19:21:36.478402 | orchestrator | 19:21:36.476 STDOUT terraform:  + key_pair = "testbed" 2025-07-06 19:21:36.478407 | orchestrator | 19:21:36.476 STDOUT terraform:  + name = "testbed-node-2" 2025-07-06 19:21:36.478411 | orchestrator | 19:21:36.476 STDOUT terraform:  + power_state = "active" 2025-07-06 19:21:36.478416 | orchestrator | 19:21:36.476 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:36.478420 | orchestrator | 19:21:36.476 STDOUT terraform:  + security_groups = (known after apply) 2025-07-06 19:21:36.478425 | orchestrator | 19:21:36.476 STDOUT terraform:  + stop_before_destroy = false 2025-07-06 19:21:36.478431 | orchestrator | 19:21:36.476 STDOUT terraform:  + updated = (known after apply) 2025-07-06 19:21:36.478436 | orchestrator | 19:21:36.476 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-07-06 19:21:36.478441 | orchestrator | 19:21:36.476 STDOUT terraform:  + block_device { 2025-07-06 19:21:36.478445 | orchestrator | 19:21:36.476 STDOUT terraform:  + boot_index = 0 
2025-07-06 19:21:36.478466 | orchestrator | 19:21:36.476 STDOUT terraform:
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }
      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)
      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)
      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }
      + binding (known after apply)
      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }
2025-07-06 19:21:36.488607 | orchestrator | 19:21:36.488 STDOUT
terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-07-06 19:21:36.488614 | orchestrator | 19:21:36.488 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-07-06 19:21:36.488621 | orchestrator | 19:21:36.488 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-06 19:21:36.488628 | orchestrator | 19:21:36.488 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-06 19:21:36.488643 | orchestrator | 19:21:36.488 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-07-06 19:21:36.488650 | orchestrator | 19:21:36.488 STDOUT terraform:  + all_tags = (known after apply) 2025-07-06 19:21:36.488658 | orchestrator | 19:21:36.488 STDOUT terraform:  + device_id = (known after apply) 2025-07-06 19:21:36.488667 | orchestrator | 19:21:36.488 STDOUT terraform:  + device_owner = (known after apply) 2025-07-06 19:21:36.488678 | orchestrator | 19:21:36.488 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-06 19:21:36.488685 | orchestrator | 19:21:36.488 STDOUT terraform:  + dns_name = (known after apply) 2025-07-06 19:21:36.488695 | orchestrator | 19:21:36.488 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:36.489248 | orchestrator | 19:21:36.488 STDOUT terraform:  + mac_address = (known after apply) 2025-07-06 19:21:36.489296 | orchestrator | 19:21:36.488 STDOUT terraform:  + network_id = (known after apply) 2025-07-06 19:21:36.489311 | orchestrator | 19:21:36.488 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-06 19:21:36.489324 | orchestrator | 19:21:36.488 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-06 19:21:36.489338 | orchestrator | 19:21:36.488 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:36.489350 | orchestrator | 19:21:36.488 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-06 19:21:36.489363 | orchestrator | 19:21:36.488 STDOUT terraform:  + 
tenant_id = (known after apply) 2025-07-06 19:21:36.489375 | orchestrator | 19:21:36.488 STDOUT terraform:  + allowed_address_pairs { 2025-07-06 19:21:36.489388 | orchestrator | 19:21:36.488 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-06 19:21:36.489401 | orchestrator | 19:21:36.488 STDOUT terraform:  } 2025-07-06 19:21:36.489413 | orchestrator | 19:21:36.488 STDOUT terraform:  + allowed_address_pairs { 2025-07-06 19:21:36.489426 | orchestrator | 19:21:36.488 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-07-06 19:21:36.489440 | orchestrator | 19:21:36.488 STDOUT terraform:  } 2025-07-06 19:21:36.489512 | orchestrator | 19:21:36.489 STDOUT terraform:  + allowed_address_pairs { 2025-07-06 19:21:36.489529 | orchestrator | 19:21:36.489 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-07-06 19:21:36.489555 | orchestrator | 19:21:36.489 STDOUT terraform:  } 2025-07-06 19:21:36.489568 | orchestrator | 19:21:36.489 STDOUT terraform:  + allowed_address_pairs { 2025-07-06 19:21:36.489579 | orchestrator | 19:21:36.489 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-07-06 19:21:36.489591 | orchestrator | 19:21:36.489 STDOUT terraform:  } 2025-07-06 19:21:36.489603 | orchestrator | 19:21:36.489 STDOUT terraform:  + binding (known after apply) 2025-07-06 19:21:36.489615 | orchestrator | 19:21:36.489 STDOUT terraform:  + fixed_ip { 2025-07-06 19:21:36.489626 | orchestrator | 19:21:36.489 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-07-06 19:21:36.489637 | orchestrator | 19:21:36.489 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-06 19:21:36.489649 | orchestrator | 19:21:36.489 STDOUT terraform:  } 2025-07-06 19:21:36.489660 | orchestrator | 19:21:36.489 STDOUT terraform:  } 2025-07-06 19:21:36.489677 | orchestrator | 19:21:36.489 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-07-06 19:21:36.489690 | orchestrator | 19:21:36.489 STDOUT terraform:  + resource 
"openstack_networking_port_v2" "node_port_management" { 2025-07-06 19:21:36.489701 | orchestrator | 19:21:36.489 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-06 19:21:36.489713 | orchestrator | 19:21:36.489 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-06 19:21:36.489724 | orchestrator | 19:21:36.489 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-07-06 19:21:36.489734 | orchestrator | 19:21:36.489 STDOUT terraform:  + all_tags = (known after apply) 2025-07-06 19:21:36.489745 | orchestrator | 19:21:36.489 STDOUT terraform:  + device_id = (known after apply) 2025-07-06 19:21:36.489756 | orchestrator | 19:21:36.489 STDOUT terraform:  + device_owner = (known after apply) 2025-07-06 19:21:36.489767 | orchestrator | 19:21:36.489 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-06 19:21:36.489778 | orchestrator | 19:21:36.489 STDOUT terraform:  + dns_name = (known after apply) 2025-07-06 19:21:36.489795 | orchestrator | 19:21:36.489 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:36.489807 | orchestrator | 19:21:36.489 STDOUT terraform:  + mac_address = (known after apply) 2025-07-06 19:21:36.489818 | orchestrator | 19:21:36.489 STDOUT terraform:  + network_id = (known after apply) 2025-07-06 19:21:36.489833 | orchestrator | 19:21:36.489 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-06 19:21:36.489844 | orchestrator | 19:21:36.489 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-06 19:21:36.489855 | orchestrator | 19:21:36.489 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:36.489866 | orchestrator | 19:21:36.489 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-06 19:21:36.489877 | orchestrator | 19:21:36.489 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-06 19:21:36.489892 | orchestrator | 19:21:36.489 STDOUT terraform:  + allowed_address_pairs { 2025-07-06 19:21:36.489903 | 
orchestrator | 19:21:36.489 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-06 19:21:36.489922 | orchestrator | 19:21:36.489 STDOUT terraform:  } 2025-07-06 19:21:36.489934 | orchestrator | 19:21:36.489 STDOUT terraform:  + allowed_address_pairs { 2025-07-06 19:21:36.489948 | orchestrator | 19:21:36.489 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-07-06 19:21:36.489959 | orchestrator | 19:21:36.489 STDOUT terraform:  } 2025-07-06 19:21:36.489970 | orchestrator | 19:21:36.489 STDOUT terraform:  + allowed_address_pairs { 2025-07-06 19:21:36.489985 | orchestrator | 19:21:36.489 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-07-06 19:21:36.489997 | orchestrator | 19:21:36.489 STDOUT terraform:  } 2025-07-06 19:21:36.490008 | orchestrator | 19:21:36.489 STDOUT terraform:  + allowed_address_pairs { 2025-07-06 19:21:36.490055 | orchestrator | 19:21:36.489 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-07-06 19:21:36.490067 | orchestrator | 19:21:36.490 STDOUT terraform:  } 2025-07-06 19:21:36.490079 | orchestrator | 19:21:36.490 STDOUT terraform:  + binding (known after apply) 2025-07-06 19:21:36.490095 | orchestrator | 19:21:36.490 STDOUT terraform:  + fixed_ip { 2025-07-06 19:21:36.490106 | orchestrator | 19:21:36.490 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-07-06 19:21:36.490122 | orchestrator | 19:21:36.490 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-06 19:21:36.490134 | orchestrator | 19:21:36.490 STDOUT terraform:  } 2025-07-06 19:21:36.490145 | orchestrator | 19:21:36.490 STDOUT terraform:  } 2025-07-06 19:21:36.490160 | orchestrator | 19:21:36.490 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-07-06 19:21:36.490214 | orchestrator | 19:21:36.490 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-07-06 19:21:36.490873 | orchestrator | 19:21:36.490 STDOUT terraform:  + admin_state_up = (known after 
apply) 2025-07-06 19:21:36.490903 | orchestrator | 19:21:36.490 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-06 19:21:36.490915 | orchestrator | 19:21:36.490 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-07-06 19:21:36.490927 | orchestrator | 19:21:36.490 STDOUT terraform:  + all_tags = (known after apply) 2025-07-06 19:21:36.490938 | orchestrator | 19:21:36.490 STDOUT terraform:  + device_id = (known after apply) 2025-07-06 19:21:36.490949 | orchestrator | 19:21:36.490 STDOUT terraform:  + device_owner = (known after apply) 2025-07-06 19:21:36.490960 | orchestrator | 19:21:36.490 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-06 19:21:36.490970 | orchestrator | 19:21:36.490 STDOUT terraform:  + dns_name = (known after apply) 2025-07-06 19:21:36.490981 | orchestrator | 19:21:36.490 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:36.490991 | orchestrator | 19:21:36.490 STDOUT terraform:  + mac_address = (known after apply) 2025-07-06 19:21:36.491001 | orchestrator | 19:21:36.490 STDOUT terraform:  + network_id = (known after apply) 2025-07-06 19:21:36.491011 | orchestrator | 19:21:36.490 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-06 19:21:36.491034 | orchestrator | 19:21:36.490 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-06 19:21:36.491044 | orchestrator | 19:21:36.490 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:36.491055 | orchestrator | 19:21:36.490 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-06 19:21:36.491065 | orchestrator | 19:21:36.490 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-06 19:21:36.491076 | orchestrator | 19:21:36.490 STDOUT terraform:  + allowed_address_pairs { 2025-07-06 19:21:36.491087 | orchestrator | 19:21:36.490 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-06 19:21:36.491097 | orchestrator | 19:21:36.490 STDOUT terraform:  } 2025-07-06 
19:21:36.491108 | orchestrator | 19:21:36.490 STDOUT terraform:  + allowed_address_pairs { 2025-07-06 19:21:36.491118 | orchestrator | 19:21:36.490 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-07-06 19:21:36.491128 | orchestrator | 19:21:36.490 STDOUT terraform:  } 2025-07-06 19:21:36.491138 | orchestrator | 19:21:36.490 STDOUT terraform:  + allowed_address_pairs { 2025-07-06 19:21:36.491147 | orchestrator | 19:21:36.490 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-07-06 19:21:36.491158 | orchestrator | 19:21:36.490 STDOUT terraform:  } 2025-07-06 19:21:36.491174 | orchestrator | 19:21:36.490 STDOUT terraform:  + allowed_address_pairs { 2025-07-06 19:21:36.491184 | orchestrator | 19:21:36.490 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-07-06 19:21:36.491194 | orchestrator | 19:21:36.490 STDOUT terraform:  } 2025-07-06 19:21:36.491205 | orchestrator | 19:21:36.490 STDOUT terraform:  + binding (known after apply) 2025-07-06 19:21:36.491215 | orchestrator | 19:21:36.490 STDOUT terraform:  + fixed_ip { 2025-07-06 19:21:36.491225 | orchestrator | 19:21:36.490 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-07-06 19:21:36.491235 | orchestrator | 19:21:36.490 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-06 19:21:36.491246 | orchestrator | 19:21:36.491 STDOUT terraform:  } 2025-07-06 19:21:36.491256 | orchestrator | 19:21:36.491 STDOUT terraform:  } 2025-07-06 19:21:36.491267 | orchestrator | 19:21:36.491 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-07-06 19:21:36.491277 | orchestrator | 19:21:36.491 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-07-06 19:21:36.491288 | orchestrator | 19:21:36.491 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-06 19:21:36.491302 | orchestrator | 19:21:36.491 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-06 19:21:36.491312 | orchestrator | 
19:21:36.491 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-07-06 19:21:36.491323 | orchestrator | 19:21:36.491 STDOUT terraform:  + all_tags = (known after apply) 2025-07-06 19:21:36.491337 | orchestrator | 19:21:36.491 STDOUT terraform:  + device_id = (known after apply) 2025-07-06 19:21:36.494076 | orchestrator | 19:21:36.491 STDOUT terraform:  + device_owner = (known after apply) 2025-07-06 19:21:36.494117 | orchestrator | 19:21:36.491 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-06 19:21:36.494151 | orchestrator | 19:21:36.491 STDOUT terraform:  + dns_name = (known after apply) 2025-07-06 19:21:36.494162 | orchestrator | 19:21:36.491 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:36.494172 | orchestrator | 19:21:36.491 STDOUT terraform:  + mac_address = (known after apply) 2025-07-06 19:21:36.494182 | orchestrator | 19:21:36.491 STDOUT terraform:  + network_id = (known after apply) 2025-07-06 19:21:36.494193 | orchestrator | 19:21:36.491 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-06 19:21:36.494212 | orchestrator | 19:21:36.491 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-06 19:21:36.494226 | orchestrator | 19:21:36.491 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:36.494237 | orchestrator | 19:21:36.491 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-06 19:21:36.494247 | orchestrator | 19:21:36.491 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-06 19:21:36.494257 | orchestrator | 19:21:36.491 STDOUT terraform:  + allowed_address_pairs { 2025-07-06 19:21:36.494273 | orchestrator | 19:21:36.491 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-06 19:21:36.494284 | orchestrator | 19:21:36.491 STDOUT terraform:  } 2025-07-06 19:21:36.494356 | orchestrator | 19:21:36.491 STDOUT terraform:  + allowed_address_pairs { 2025-07-06 19:21:36.494363 | orchestrator | 19:21:36.491 STDOUT terraform: 
 + ip_address = "192.168.16.254/20" 2025-07-06 19:21:36.494369 | orchestrator | 19:21:36.491 STDOUT terraform:  } 2025-07-06 19:21:36.494375 | orchestrator | 19:21:36.491 STDOUT terraform:  + allowed_address_pairs { 2025-07-06 19:21:36.494381 | orchestrator | 19:21:36.491 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-07-06 19:21:36.494387 | orchestrator | 19:21:36.491 STDOUT terraform:  } 2025-07-06 19:21:36.494393 | orchestrator | 19:21:36.491 STDOUT terraform:  + allowed_address_pairs { 2025-07-06 19:21:36.494399 | orchestrator | 19:21:36.491 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-07-06 19:21:36.494405 | orchestrator | 19:21:36.491 STDOUT terraform:  } 2025-07-06 19:21:36.494411 | orchestrator | 19:21:36.491 STDOUT terraform:  + binding (known after apply) 2025-07-06 19:21:36.494417 | orchestrator | 19:21:36.491 STDOUT terraform:  + fixed_ip { 2025-07-06 19:21:36.494423 | orchestrator | 19:21:36.491 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-07-06 19:21:36.494429 | orchestrator | 19:21:36.492 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-06 19:21:36.494435 | orchestrator | 19:21:36.492 STDOUT terraform:  } 2025-07-06 19:21:36.494441 | orchestrator | 19:21:36.492 STDOUT terraform:  } 2025-07-06 19:21:36.494448 | orchestrator | 19:21:36.492 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-07-06 19:21:36.494504 | orchestrator | 19:21:36.492 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-07-06 19:21:36.494511 | orchestrator | 19:21:36.492 STDOUT terraform:  + force_destroy = false 2025-07-06 19:21:36.494517 | orchestrator | 19:21:36.492 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:36.494530 | orchestrator | 19:21:36.492 STDOUT terraform:  + port_id = (known after apply) 2025-07-06 19:21:36.494537 | orchestrator | 19:21:36.492 STDOUT terraform:  + region = (known after apply) 2025-07-06 
19:21:36.494543 | orchestrator | 19:21:36.492 STDOUT terraform:  + router_id = (known after apply) 2025-07-06 19:21:36.494549 | orchestrator | 19:21:36.492 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-06 19:21:36.494555 | orchestrator | 19:21:36.492 STDOUT terraform:  } 2025-07-06 19:21:36.494562 | orchestrator | 19:21:36.492 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-07-06 19:21:36.494579 | orchestrator | 19:21:36.492 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-07-06 19:21:36.494586 | orchestrator | 19:21:36.492 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-06 19:21:36.494592 | orchestrator | 19:21:36.492 STDOUT terraform:  + all_tags = (known after apply) 2025-07-06 19:21:36.494598 | orchestrator | 19:21:36.492 STDOUT terraform:  + availability_zone_hints = [ 2025-07-06 19:21:36.494607 | orchestrator | 19:21:36.492 STDOUT terraform:  + "nova", 2025-07-06 19:21:36.494614 | orchestrator | 19:21:36.492 STDOUT terraform:  ] 2025-07-06 19:21:36.494620 | orchestrator | 19:21:36.492 STDOUT terraform:  + distributed = (known after apply) 2025-07-06 19:21:36.494626 | orchestrator | 19:21:36.492 STDOUT terraform:  + enable_snat = (known after apply) 2025-07-06 19:21:36.494632 | orchestrator | 19:21:36.492 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-07-06 19:21:36.494642 | orchestrator | 19:21:36.492 STDOUT terraform:  + external_qos_policy_id = (known after apply) 2025-07-06 19:21:36.494648 | orchestrator | 19:21:36.492 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:36.494654 | orchestrator | 19:21:36.492 STDOUT terraform:  + name = "testbed" 2025-07-06 19:21:36.494661 | orchestrator | 19:21:36.492 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:36.494667 | orchestrator | 19:21:36.492 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-06 19:21:36.494673 | orchestrator | 
19:21:36.492 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-07-06 19:21:36.494679 | orchestrator | 19:21:36.492 STDOUT terraform:  } 2025-07-06 19:21:36.494685 | orchestrator | 19:21:36.492 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-07-06 19:21:36.494692 | orchestrator | 19:21:36.492 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-07-06 19:21:36.494699 | orchestrator | 19:21:36.493 STDOUT terraform:  + description = "ssh" 2025-07-06 19:21:36.494705 | orchestrator | 19:21:36.493 STDOUT terraform:  + direction = "ingress" 2025-07-06 19:21:36.494711 | orchestrator | 19:21:36.493 STDOUT terraform:  + ethertype = "IPv4" 2025-07-06 19:21:36.494717 | orchestrator | 19:21:36.493 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:36.494723 | orchestrator | 19:21:36.493 STDOUT terraform:  + port_range_max = 22 2025-07-06 19:21:36.494729 | orchestrator | 19:21:36.493 STDOUT terraform:  + port_range_min = 22 2025-07-06 19:21:36.494739 | orchestrator | 19:21:36.493 STDOUT terraform:  + protocol = "tcp" 2025-07-06 19:21:36.494746 | orchestrator | 19:21:36.493 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:36.494752 | orchestrator | 19:21:36.493 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-06 19:21:36.494758 | orchestrator | 19:21:36.493 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-06 19:21:36.494764 | orchestrator | 19:21:36.493 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-07-06 19:21:36.494770 | orchestrator | 19:21:36.493 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-06 19:21:36.494776 | orchestrator | 19:21:36.493 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-06 19:21:36.494781 | orchestrator | 19:21:36.493 STDOUT terraform:  } 2025-07-06 19:21:36.494787 | orchestrator | 19:21:36.493 STDOUT 
terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-07-06 19:21:36.494792 | orchestrator | 19:21:36.493 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-07-06 19:21:36.494798 | orchestrator | 19:21:36.493 STDOUT terraform:  + description = "wireguard" 2025-07-06 19:21:36.494803 | orchestrator | 19:21:36.493 STDOUT terraform:  + direction = "ingress" 2025-07-06 19:21:36.494812 | orchestrator | 19:21:36.493 STDOUT terraform:  + ethertype = "IPv4" 2025-07-06 19:21:36.494818 | orchestrator | 19:21:36.493 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:36.494823 | orchestrator | 19:21:36.493 STDOUT terraform:  + port_range_max = 51820 2025-07-06 19:21:36.494829 | orchestrator | 19:21:36.493 STDOUT terraform:  + port_range_min = 51820 2025-07-06 19:21:36.494834 | orchestrator | 19:21:36.493 STDOUT terraform:  + protocol = "udp" 2025-07-06 19:21:36.494839 | orchestrator | 19:21:36.493 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:36.494845 | orchestrator | 19:21:36.493 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-06 19:21:36.494850 | orchestrator | 19:21:36.493 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-06 19:21:36.494855 | orchestrator | 19:21:36.493 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-07-06 19:21:36.494864 | orchestrator | 19:21:36.493 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-06 19:21:36.494869 | orchestrator | 19:21:36.493 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-06 19:21:36.494875 | orchestrator | 19:21:36.493 STDOUT terraform:  } 2025-07-06 19:21:36.494880 | orchestrator | 19:21:36.494 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-07-06 19:21:36.494886 | orchestrator | 19:21:36.494 STDOUT terraform:  + resource 
"openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-07-06 19:21:36.494891 | orchestrator | 19:21:36.494 STDOUT terraform:  + direction = "ingress" 2025-07-06 19:21:36.494897 | orchestrator | 19:21:36.494 STDOUT terraform:  + ethertype = "IPv4" 2025-07-06 19:21:36.494906 | orchestrator | 19:21:36.494 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:36.494911 | orchestrator | 19:21:36.494 STDOUT terraform:  + protocol = "tcp" 2025-07-06 19:21:36.494917 | orchestrator | 19:21:36.494 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:36.494922 | orchestrator | 19:21:36.494 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-06 19:21:36.494928 | orchestrator | 19:21:36.494 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-06 19:21:36.494933 | orchestrator | 19:21:36.494 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-07-06 19:21:36.494938 | orchestrator | 19:21:36.494 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-06 19:21:36.494944 | orchestrator | 19:21:36.494 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-06 19:21:36.494949 | orchestrator | 19:21:36.494 STDOUT terraform:  } 2025-07-06 19:21:36.494954 | orchestrator | 19:21:36.494 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-07-06 19:21:36.494960 | orchestrator | 19:21:36.494 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-07-06 19:21:36.494965 | orchestrator | 19:21:36.494 STDOUT terraform:  + direction = "ingress" 2025-07-06 19:21:36.494970 | orchestrator | 19:21:36.494 STDOUT terraform:  + ethertype = "IPv4" 2025-07-06 19:21:36.494976 | orchestrator | 19:21:36.494 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:36.494981 | orchestrator | 19:21:36.494 STDOUT terraform:  + protocol = "udp" 2025-07-06 19:21:36.494987 | 
orchestrator | 19:21:36.494 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:36.494992 | orchestrator | 19:21:36.494 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-06 19:21:36.495000 | orchestrator | 19:21:36.494 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-06 19:21:36.495005 | orchestrator | 19:21:36.494 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-07-06 19:21:36.495011 | orchestrator | 19:21:36.494 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-06 19:21:36.495016 | orchestrator | 19:21:36.494 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-06 19:21:36.495021 | orchestrator | 19:21:36.494 STDOUT terraform:  } 2025-07-06 19:21:36.495027 | orchestrator | 19:21:36.494 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-07-06 19:21:36.495036 | orchestrator | 19:21:36.494 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-07-06 19:21:36.495049 | orchestrator | 19:21:36.495 STDOUT terraform:  + direction = "ingress" 2025-07-06 19:21:36.495060 | orchestrator | 19:21:36.495 STDOUT terraform:  + ethertype = "IPv4" 2025-07-06 19:21:36.495101 | orchestrator | 19:21:36.495 STDOUT terraform:  + id = (known after apply) 2025-07-06 19:21:36.495116 | orchestrator | 19:21:36.495 STDOUT terraform:  + protocol = "icmp" 2025-07-06 19:21:36.495156 | orchestrator | 19:21:36.495 STDOUT terraform:  + region = (known after apply) 2025-07-06 19:21:36.495192 | orchestrator | 19:21:36.495 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-06 19:21:36.495228 | orchestrator | 19:21:36.495 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-06 19:21:36.495257 | orchestrator | 19:21:36.495 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-07-06 19:21:36.495353 | orchestrator | 19:21:36.495 STDOUT terraform:  + 
security_group_id = (known after apply)
2025-07-06 19:21:36.495370 | orchestrator | 19:21:36.495 STDOUT terraform:  + tenant_id = (known after apply)
2025-07-06 19:21:36.495379 | orchestrator | 19:21:36.495 STDOUT terraform:  }
2025-07-06 19:21:36.495391 | orchestrator | 19:21:36.495 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
2025-07-06 19:21:36.495428 | orchestrator | 19:21:36.495 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
2025-07-06 19:21:36.495480 | orchestrator | 19:21:36.495 STDOUT terraform:  + direction = "ingress"
2025-07-06 19:21:36.495495 | orchestrator | 19:21:36.495 STDOUT terraform:  + ethertype = "IPv4"
2025-07-06 19:21:36.495533 | orchestrator | 19:21:36.495 STDOUT terraform:  + id = (known after apply)
2025-07-06 19:21:36.495558 | orchestrator | 19:21:36.495 STDOUT terraform:  + protocol = "tcp"
2025-07-06 19:21:36.495595 | orchestrator | 19:21:36.495 STDOUT terraform:  + region = (known after apply)
2025-07-06 19:21:36.499545 | orchestrator | 19:21:36.495 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-07-06 19:21:36.499597 | orchestrator | 19:21:36.495 STDOUT terraform:  + remote_group_id = (known after apply)
2025-07-06 19:21:36.499604 | orchestrator | 19:21:36.495 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-07-06 19:21:36.499611 | orchestrator | 19:21:36.495 STDOUT terraform:  + security_group_id = (known after apply)
2025-07-06 19:21:36.499617 | orchestrator | 19:21:36.495 STDOUT terraform:  + tenant_id = (known after apply)
2025-07-06 19:21:36.499623 | orchestrator | 19:21:36.495 STDOUT terraform:  }
2025-07-06 19:21:36.499629 | orchestrator | 19:21:36.495 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
2025-07-06 19:21:36.499636 | orchestrator | 19:21:36.495 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
2025-07-06 19:21:36.499642 | orchestrator | 19:21:36.495 STDOUT terraform:  + direction = "ingress"
2025-07-06 19:21:36.499648 | orchestrator | 19:21:36.495 STDOUT terraform:  + ethertype = "IPv4"
2025-07-06 19:21:36.499653 | orchestrator | 19:21:36.495 STDOUT terraform:  + id = (known after apply)
2025-07-06 19:21:36.499660 | orchestrator | 19:21:36.495 STDOUT terraform:  + protocol = "udp"
2025-07-06 19:21:36.499666 | orchestrator | 19:21:36.496 STDOUT terraform:  + region = (known after apply)
2025-07-06 19:21:36.499671 | orchestrator | 19:21:36.496 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-07-06 19:21:36.499676 | orchestrator | 19:21:36.496 STDOUT terraform:  + remote_group_id = (known after apply)
2025-07-06 19:21:36.499692 | orchestrator | 19:21:36.496 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-07-06 19:21:36.499697 | orchestrator | 19:21:36.496 STDOUT terraform:  + security_group_id = (known after apply)
2025-07-06 19:21:36.499702 | orchestrator | 19:21:36.496 STDOUT terraform:  + tenant_id = (known after apply)
2025-07-06 19:21:36.499707 | orchestrator | 19:21:36.496 STDOUT terraform:  }
2025-07-06 19:21:36.499712 | orchestrator | 19:21:36.496 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
2025-07-06 19:21:36.499717 | orchestrator | 19:21:36.496 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
2025-07-06 19:21:36.499721 | orchestrator | 19:21:36.496 STDOUT terraform:  + direction = "ingress"
2025-07-06 19:21:36.499730 | orchestrator | 19:21:36.496 STDOUT terraform:  + ethertype = "IPv4"
2025-07-06 19:21:36.499735 | orchestrator | 19:21:36.496 STDOUT terraform:  + id = (known after apply)
2025-07-06 19:21:36.499739 | orchestrator | 19:21:36.496 STDOUT terraform:  + protocol = "icmp"
2025-07-06 19:21:36.499744 | orchestrator | 19:21:36.496 STDOUT terraform:  + region = (known after apply)
2025-07-06 19:21:36.499749 | orchestrator | 19:21:36.496 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-07-06 19:21:36.499754 | orchestrator | 19:21:36.496 STDOUT terraform:  + remote_group_id = (known after apply)
2025-07-06 19:21:36.499758 | orchestrator | 19:21:36.496 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-07-06 19:21:36.499763 | orchestrator | 19:21:36.496 STDOUT terraform:  + security_group_id = (known after apply)
2025-07-06 19:21:36.499768 | orchestrator | 19:21:36.496 STDOUT terraform:  + tenant_id = (known after apply)
2025-07-06 19:21:36.499773 | orchestrator | 19:21:36.496 STDOUT terraform:  }
2025-07-06 19:21:36.499778 | orchestrator | 19:21:36.496 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2025-07-06 19:21:36.499783 | orchestrator | 19:21:36.496 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2025-07-06 19:21:36.499788 | orchestrator | 19:21:36.496 STDOUT terraform:  + description = "vrrp"
2025-07-06 19:21:36.499793 | orchestrator | 19:21:36.496 STDOUT terraform:  + direction = "ingress"
2025-07-06 19:21:36.499803 | orchestrator | 19:21:36.496 STDOUT terraform:  + ethertype = "IPv4"
2025-07-06 19:21:36.499808 | orchestrator | 19:21:36.496 STDOUT terraform:  + id = (known after apply)
2025-07-06 19:21:36.499813 | orchestrator | 19:21:36.496 STDOUT terraform:  + protocol = "112"
2025-07-06 19:21:36.499818 | orchestrator | 19:21:36.496 STDOUT terraform:  + region = (known after apply)
2025-07-06 19:21:36.499823 | orchestrator | 19:21:36.497 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-07-06 19:21:36.499828 | orchestrator | 19:21:36.497 STDOUT terraform:  + remote_group_id = (known after apply)
2025-07-06 19:21:36.499833 | orchestrator | 19:21:36.497 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-07-06 19:21:36.499841 | orchestrator | 19:21:36.497 STDOUT terraform:  + security_group_id = (known after apply)
2025-07-06 19:21:36.499846 | orchestrator | 19:21:36.497 STDOUT terraform:  + tenant_id = (known after apply)
2025-07-06 19:21:36.499851 | orchestrator | 19:21:36.497 STDOUT terraform:  }
2025-07-06 19:21:36.499856 | orchestrator | 19:21:36.497 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created
2025-07-06 19:21:36.499861 | orchestrator | 19:21:36.497 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" {
2025-07-06 19:21:36.499866 | orchestrator | 19:21:36.497 STDOUT terraform:  + all_tags = (known after apply)
2025-07-06 19:21:36.499875 | orchestrator | 19:21:36.497 STDOUT terraform:  + description = "management security group"
2025-07-06 19:21:36.499880 | orchestrator | 19:21:36.497 STDOUT terraform:  + id = (known after apply)
2025-07-06 19:21:36.499884 | orchestrator | 19:21:36.497 STDOUT terraform:  + name = "testbed-management"
2025-07-06 19:21:36.499889 | orchestrator | 19:21:36.497 STDOUT terraform:  + region = (known after apply)
2025-07-06 19:21:36.499894 | orchestrator | 19:21:36.497 STDOUT terraform:  + stateful = (known after apply)
2025-07-06 19:21:36.499899 | orchestrator | 19:21:36.497 STDOUT terraform:  + tenant_id = (known after apply)
2025-07-06 19:21:36.499904 | orchestrator | 19:21:36.497 STDOUT terraform:  }
2025-07-06 19:21:36.499909 | orchestrator | 19:21:36.497 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created
2025-07-06 19:21:36.499913 | orchestrator | 19:21:36.497 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" {
2025-07-06 19:21:36.499921 | orchestrator | 19:21:36.497 STDOUT terraform:  + all_tags = (known after apply)
2025-07-06 19:21:36.499926 | orchestrator | 19:21:36.497 STDOUT terraform:  + description = "node security group"
2025-07-06 19:21:36.499931 | orchestrator | 19:21:36.497 STDOUT terraform:  + id = (known after apply)
2025-07-06 19:21:36.499936 | orchestrator | 19:21:36.497 STDOUT terraform:  + name = "testbed-node"
2025-07-06 19:21:36.499941 | orchestrator | 19:21:36.497 STDOUT terraform:  + region = (known after apply)
2025-07-06 19:21:36.499946 | orchestrator | 19:21:36.497 STDOUT terraform:  + stateful = (known after apply)
2025-07-06 19:21:36.499950 | orchestrator | 19:21:36.497 STDOUT terraform:  + tenant_id = (known after apply)
2025-07-06 19:21:36.499955 | orchestrator | 19:21:36.497 STDOUT terraform:  }
2025-07-06 19:21:36.499960 | orchestrator | 19:21:36.497 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created
2025-07-06 19:21:36.499965 | orchestrator | 19:21:36.497 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" {
2025-07-06 19:21:36.499970 | orchestrator | 19:21:36.497 STDOUT terraform:  + all_tags = (known after apply)
2025-07-06 19:21:36.499975 | orchestrator | 19:21:36.497 STDOUT terraform:  + cidr = "192.168.16.0/20"
2025-07-06 19:21:36.499980 | orchestrator | 19:21:36.497 STDOUT terraform:  + dns_nameservers = [
2025-07-06 19:21:36.499985 | orchestrator | 19:21:36.497 STDOUT terraform:  + "8.8.8.8",
2025-07-06 19:21:36.499990 | orchestrator | 19:21:36.498 STDOUT terraform:  + "9.9.9.9",
2025-07-06 19:21:36.499998 | orchestrator | 19:21:36.498 STDOUT terraform:  ]
2025-07-06 19:21:36.500006 | orchestrator | 19:21:36.498 STDOUT terraform:  + enable_dhcp = true
2025-07-06 19:21:36.500011 | orchestrator | 19:21:36.498 STDOUT terraform:  + gateway_ip = (known after apply)
2025-07-06 19:21:36.500016 | orchestrator | 19:21:36.498 STDOUT terraform:  + id = (known after apply)
2025-07-06 19:21:36.500021 | orchestrator | 19:21:36.498 STDOUT terraform:  + ip_version = 4
2025-07-06 19:21:36.500026 | orchestrator | 19:21:36.498 STDOUT terraform:  + ipv6_address_mode = (known after apply)
2025-07-06 19:21:36.500030 | orchestrator | 19:21:36.498 STDOUT terraform:  + ipv6_ra_mode = (known after apply)
2025-07-06 19:21:36.500035 | orchestrator | 19:21:36.498 STDOUT terraform:  + name = "subnet-testbed-management"
2025-07-06 19:21:36.500040 | orchestrator | 19:21:36.498 STDOUT terraform:  + network_id = (known after apply)
2025-07-06 19:21:36.500045 | orchestrator | 19:21:36.498 STDOUT terraform:  + no_gateway = false
2025-07-06 19:21:36.500050 | orchestrator | 19:21:36.498 STDOUT terraform:  + region = (known after apply)
2025-07-06 19:21:36.500054 | orchestrator | 19:21:36.498 STDOUT terraform:  + service_types = (known after apply)
2025-07-06 19:21:36.500059 | orchestrator | 19:21:36.498 STDOUT terraform:  + tenant_id = (known after apply)
2025-07-06 19:21:36.500064 | orchestrator | 19:21:36.498 STDOUT terraform:  + allocation_pool {
2025-07-06 19:21:36.500069 | orchestrator | 19:21:36.498 STDOUT terraform:  + end = "192.168.31.250"
2025-07-06 19:21:36.500074 | orchestrator | 19:21:36.498 STDOUT terraform:  + start = "192.168.31.200"
2025-07-06 19:21:36.500079 | orchestrator | 19:21:36.498 STDOUT terraform:  }
2025-07-06 19:21:36.500084 | orchestrator | 19:21:36.498 STDOUT terraform:  }
2025-07-06 19:21:36.500089 | orchestrator | 19:21:36.498 STDOUT terraform:  # terraform_data.image will be created
2025-07-06 19:21:36.500094 | orchestrator | 19:21:36.498 STDOUT terraform:  + resource "terraform_data" "image" {
2025-07-06 19:21:36.500098 | orchestrator | 19:21:36.498 STDOUT terraform:  + id = (known after apply)
2025-07-06 19:21:36.500103 | orchestrator | 19:21:36.498 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-07-06 19:21:36.500108 | orchestrator | 19:21:36.498 STDOUT terraform:  + output = (known after apply)
2025-07-06 19:21:36.500113 | orchestrator | 19:21:36.498 STDOUT terraform:  }
2025-07-06 19:21:36.500118 | orchestrator | 19:21:36.498 STDOUT terraform:  # terraform_data.image_node will be created
2025-07-06 19:21:36.500123 | orchestrator | 19:21:36.498 STDOUT terraform:  + resource "terraform_data" "image_node" {
2025-07-06 19:21:36.500127 | orchestrator | 19:21:36.498 STDOUT terraform:  + id = (known after apply)
2025-07-06 19:21:36.500132 | orchestrator | 19:21:36.498 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-07-06 19:21:36.500137 | orchestrator | 19:21:36.498 STDOUT terraform:  + output = (known after apply)
2025-07-06 19:21:36.500142 | orchestrator | 19:21:36.498 STDOUT terraform:  }
2025-07-06 19:21:36.500147 | orchestrator | 19:21:36.498 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy.
2025-07-06 19:21:36.500151 | orchestrator | 19:21:36.498 STDOUT terraform: Changes to Outputs:
2025-07-06 19:21:36.500159 | orchestrator | 19:21:36.498 STDOUT terraform:  + manager_address = (sensitive value)
2025-07-06 19:21:36.500164 | orchestrator | 19:21:36.498 STDOUT terraform:  + private_key = (sensitive value)
2025-07-06 19:21:36.673199 | orchestrator | 19:21:36.672 STDOUT terraform: terraform_data.image: Creating...
2025-07-06 19:21:36.673288 | orchestrator | 19:21:36.672 STDOUT terraform: terraform_data.image_node: Creating...
2025-07-06 19:21:36.673611 | orchestrator | 19:21:36.673 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=27fdc3c5-cae4-6917-0945-eac275bed0cb]
2025-07-06 19:21:36.674091 | orchestrator | 19:21:36.673 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=1e340317-cf47-0d16-dc4b-000e05329ba6]
2025-07-06 19:21:36.691144 | orchestrator | 19:21:36.690 STDOUT terraform: openstack_compute_keypair_v2.key: Creating...
2025-07-06 19:21:36.691858 | orchestrator | 19:21:36.691 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading...
2025-07-06 19:21:36.694612 | orchestrator | 19:21:36.694 STDOUT terraform: data.openstack_images_image_v2.image: Reading...
2025-07-06 19:21:36.703009 | orchestrator | 19:21:36.701 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating...
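Editor's note: the node security group shown in the plan above can be reconstructed as Terraform HCL roughly as follows. This is a sketch inferred from the plan output only, not the actual OSISM testbed sources; the file layout and any attributes the plan does not show are assumptions:

```hcl
# Sketch reconstructed from the plan output; real testbed sources may differ.
resource "openstack_networking_secgroup_v2" "security_group_node" {
  name        = "testbed-node"
  description = "node security group"
}

# rule1 (tcp), rule2 (udp), and rule3 (icmp) differ only in protocol.
resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}

# The vrrp rule uses IP protocol number 112 (VRRP has no named protocol here).
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112"
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}
```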
2025-07-06 19:21:36.703068 | orchestrator | 19:21:36.702 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2025-07-06 19:21:36.703246 | orchestrator | 19:21:36.703 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2025-07-06 19:21:36.704943 | orchestrator | 19:21:36.704 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2025-07-06 19:21:36.705287 | orchestrator | 19:21:36.705 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2025-07-06 19:21:36.706165 | orchestrator | 19:21:36.705 STDOUT terraform: openstack_networking_network_v2.net_management: Creating...
2025-07-06 19:21:36.711801 | orchestrator | 19:21:36.711 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2025-07-06 19:21:37.170707 | orchestrator | 19:21:37.170 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990]
2025-07-06 19:21:37.171203 | orchestrator | 19:21:37.171 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990]
2025-07-06 19:21:37.177487 | orchestrator | 19:21:37.177 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2025-07-06 19:21:37.179981 | orchestrator | 19:21:37.179 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2025-07-06 19:21:37.249715 | orchestrator | 19:21:37.249 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2025-07-06 19:21:37.260361 | orchestrator | 19:21:37.260 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2025-07-06 19:21:42.694517 | orchestrator | 19:21:42.694 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=52c77de7-fa99-41ed-9c83-55c21b47d583]
2025-07-06 19:21:42.708222 | orchestrator | 19:21:42.707 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-07-06 19:21:46.703103 | orchestrator | 19:21:46.702 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed]
2025-07-06 19:21:46.705090 | orchestrator | 19:21:46.704 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed]
2025-07-06 19:21:46.705283 | orchestrator | 19:21:46.704 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed]
2025-07-06 19:21:46.707147 | orchestrator | 19:21:46.706 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed]
2025-07-06 19:21:46.707266 | orchestrator | 19:21:46.707 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed]
2025-07-06 19:21:46.712164 | orchestrator | 19:21:46.712 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed]
2025-07-06 19:21:47.178571 | orchestrator | 19:21:47.178 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed]
2025-07-06 19:21:47.180511 | orchestrator | 19:21:47.180 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed]
2025-07-06 19:21:47.261819 | orchestrator | 19:21:47.261 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed]
2025-07-06 19:21:47.276172 | orchestrator | 19:21:47.275 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 10s [id=4a0eaf3f-1395-4073-9878-c6e703eff332]
2025-07-06 19:21:47.289549 | orchestrator | 19:21:47.288 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 10s [id=fd99b70f-8aa3-4e15-8e66-07a34fe10111]
2025-07-06 19:21:47.291320 | orchestrator | 19:21:47.291 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-07-06 19:21:47.292872 | orchestrator | 19:21:47.292 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-07-06 19:21:47.298820 | orchestrator | 19:21:47.298 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 10s [id=29aeef2c-15f7-4912-be6e-922934b043d5]
2025-07-06 19:21:47.303728 | orchestrator | 19:21:47.303 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-07-06 19:21:47.305105 | orchestrator | 19:21:47.304 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 10s [id=e42fce45-67a3-477c-881f-6db38785a929]
2025-07-06 19:21:47.310006 | orchestrator | 19:21:47.309 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-07-06 19:21:47.326235 | orchestrator | 19:21:47.326 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 10s [id=1751cfdb-b4ca-4b06-9fa0-b986eec2737a]
2025-07-06 19:21:47.326890 | orchestrator | 19:21:47.326 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 10s [id=3c29cd91-58e9-42ce-8653-990321e9d76b]
2025-07-06 19:21:47.336245 | orchestrator | 19:21:47.336 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-07-06 19:21:47.342904 | orchestrator | 19:21:47.342 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-07-06 19:21:47.349661 | orchestrator | 19:21:47.349 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=1e8ef4f0982bdbc64b7afdbdd20bb37e9f54d5e7]
2025-07-06 19:21:47.354931 | orchestrator | 19:21:47.354 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-07-06 19:21:47.381014 | orchestrator | 19:21:47.380 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 10s [id=28d32b1f-54bf-4890-9371-a2140c9d3e48]
2025-07-06 19:21:47.395365 | orchestrator | 19:21:47.395 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-07-06 19:21:47.404844 | orchestrator | 19:21:47.404 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 10s [id=c523d18d-f688-4547-bb4c-d63e44be8719]
2025-07-06 19:21:47.408815 | orchestrator | 19:21:47.408 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=4a3695d44c96a043590123d8864a7e8d0adbe40b]
2025-07-06 19:21:47.414138 | orchestrator | 19:21:47.413 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-07-06 19:21:47.452973 | orchestrator | 19:21:47.452 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 10s [id=0c9a7d91-c8fc-48f8-acad-853231e255dd]
2025-07-06 19:21:52.708961 | orchestrator | 19:21:52.708 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed]
2025-07-06 19:21:53.023936 | orchestrator | 19:21:53.023 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 10s [id=7b3a8a43-ab47-4c60-b4f9-72cd5d7ea920]
2025-07-06 19:21:53.305395 | orchestrator | 19:21:53.305 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=bd0ca3c0-949e-4d9d-8ff2-030737bddb4c]
2025-07-06 19:21:53.314593 | orchestrator | 19:21:53.314 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-07-06 19:21:57.293930 | orchestrator | 19:21:57.292 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed]
2025-07-06 19:21:57.294749 | orchestrator | 19:21:57.294 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed]
2025-07-06 19:21:57.305254 | orchestrator | 19:21:57.304 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed]
2025-07-06 19:21:57.311502 | orchestrator | 19:21:57.311 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed]
2025-07-06 19:21:57.337802 | orchestrator | 19:21:57.337 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed]
2025-07-06 19:21:57.356130 | orchestrator | 19:21:57.355 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed]
2025-07-06 19:21:57.701161 | orchestrator | 19:21:57.700 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 11s [id=c4ee01e1-6308-4ea1-8b72-f8c7aa0af3e8]
2025-07-06 19:21:57.934506 | orchestrator | 19:21:57.934 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 11s [id=8571afc0-e036-46a5-988a-49c98e90c838]
2025-07-06 19:21:57.934682 | orchestrator | 19:21:57.934 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 11s [id=9a47642e-d74b-47af-9cfb-c13ee6345465]
2025-07-06 19:21:57.939952 | orchestrator | 19:21:57.939 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 11s [id=df244d07-90ba-451b-8ce4-5a19b3d2e3c9]
2025-07-06 19:21:57.941833 | orchestrator | 19:21:57.941 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 11s [id=ff95bacd-4fbb-4999-b5b7-64f4756d9376]
2025-07-06 19:21:57.942320 | orchestrator | 19:21:57.942 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 11s [id=8627e713-83d6-40b5-b9ad-70826e27e3e5]
2025-07-06 19:22:00.857609 | orchestrator | 19:22:00.857 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 8s [id=6f97db7d-b6fc-4b52-9eda-f6d96c3dce4f]
2025-07-06 19:22:00.869251 | orchestrator | 19:22:00.869 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-07-06 19:22:00.871761 | orchestrator | 19:22:00.871 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-07-06 19:22:00.872556 | orchestrator | 19:22:00.872 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-07-06 19:22:01.100110 | orchestrator | 19:22:01.099 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=a9f396ba-b61b-4f7f-8f75-f033f7181335] 2025-07-06 19:22:01.104672 | orchestrator | 19:22:01.104 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=dd364d14-5cc1-4c4f-8961-4647f44167ed] 2025-07-06 19:22:01.110525 | orchestrator | 19:22:01.110 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-07-06 19:22:01.112532 | orchestrator | 19:22:01.112 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2025-07-06 19:22:01.114616 | orchestrator | 19:22:01.114 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-07-06 19:22:01.115645 | orchestrator | 19:22:01.115 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-07-06 19:22:01.116091 | orchestrator | 19:22:01.115 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-07-06 19:22:01.116286 | orchestrator | 19:22:01.116 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-07-06 19:22:01.116522 | orchestrator | 19:22:01.116 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-07-06 19:22:01.116679 | orchestrator | 19:22:01.116 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-07-06 19:22:01.122460 | orchestrator | 19:22:01.122 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 
2025-07-06 19:22:01.272618 | orchestrator | 19:22:01.272 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=08923490-9570-4918-a9fc-d406a5fa96d1] 2025-07-06 19:22:01.287026 | orchestrator | 19:22:01.286 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-07-06 19:22:01.380949 | orchestrator | 19:22:01.380 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=0a19412b-2426-4ba1-b911-5b1abf8d1f6f] 2025-07-06 19:22:01.388984 | orchestrator | 19:22:01.388 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2025-07-06 19:22:01.538663 | orchestrator | 19:22:01.538 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=06e723c6-3189-4d78-bd8c-fda168659e60] 2025-07-06 19:22:01.555665 | orchestrator | 19:22:01.555 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-07-06 19:22:01.565264 | orchestrator | 19:22:01.565 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=a33b9add-b62c-4e3f-ba97-17383da2937b] 2025-07-06 19:22:01.575347 | orchestrator | 19:22:01.575 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-07-06 19:22:01.796288 | orchestrator | 19:22:01.795 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=eab996f9-f19b-49c3-8442-0b327b547e42] 2025-07-06 19:22:01.815359 | orchestrator | 19:22:01.815 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=519951ea-f5d6-4f2e-ba5b-f53e968a834a] 2025-07-06 19:22:01.816254 | orchestrator | 19:22:01.816 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 
2025-07-06 19:22:01.827197 | orchestrator | 19:22:01.827 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 2025-07-06 19:22:01.963677 | orchestrator | 19:22:01.963 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=1454eab4-4066-4cd8-b218-278a899629b2] 2025-07-06 19:22:01.979620 | orchestrator | 19:22:01.979 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-07-06 19:22:02.013964 | orchestrator | 19:22:02.013 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=0e825452-69f9-42e5-9c0c-dc89b4b5a327] 2025-07-06 19:22:02.144668 | orchestrator | 19:22:02.144 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=46056721-05ab-452c-9a69-b32e7930f0e1] 2025-07-06 19:22:06.949816 | orchestrator | 19:22:06.949 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 6s [id=b31e8764-d178-46eb-beb4-8194960cf3db] 2025-07-06 19:22:07.051716 | orchestrator | 19:22:07.051 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 6s [id=83774561-1211-4569-934c-75ba01f5ff89] 2025-07-06 19:22:07.485526 | orchestrator | 19:22:07.485 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 5s [id=1b15a983-45d2-4505-b26e-8bc5d5455803] 2025-07-06 19:22:07.559739 | orchestrator | 19:22:07.559 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 6s [id=8b1d40d3-be5f-4ee9-a914-8a48bf2275d8] 2025-07-06 19:22:07.560788 | orchestrator | 19:22:07.560 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 6s [id=1af235d5-cff2-4638-a00c-81caa85f8f48] 2025-07-06 19:22:07.648118 | orchestrator | 19:22:07.647 STDOUT terraform: 
openstack_networking_port_v2.node_port_management[0]: Creation complete after 6s [id=53b1d9ff-3d32-4691-b0b3-2d366bc85eb2] 2025-07-06 19:22:07.708151 | orchestrator | 19:22:07.707 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 6s [id=97a380d2-ebdb-4194-b554-8088dc8c6bce] 2025-07-06 19:22:08.409959 | orchestrator | 19:22:08.409 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 7s [id=f028d4df-c0d3-4ffe-b7ef-aeddc21608ad] 2025-07-06 19:22:08.426629 | orchestrator | 19:22:08.426 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-07-06 19:22:08.445399 | orchestrator | 19:22:08.445 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 2025-07-06 19:22:08.448470 | orchestrator | 19:22:08.448 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-07-06 19:22:08.460677 | orchestrator | 19:22:08.460 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-07-06 19:22:08.468349 | orchestrator | 19:22:08.468 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-07-06 19:22:08.469817 | orchestrator | 19:22:08.469 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-07-06 19:22:08.471801 | orchestrator | 19:22:08.471 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-07-06 19:22:14.766181 | orchestrator | 19:22:14.765 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 7s [id=35645e13-a495-4547-89d6-984d5e7e3210] 2025-07-06 19:22:14.781270 | orchestrator | 19:22:14.781 STDOUT terraform: local_file.inventory: Creating... 2025-07-06 19:22:14.783419 | orchestrator | 19:22:14.783 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 
2025-07-06 19:22:14.786138 | orchestrator | 19:22:14.785 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=204545b258a6a879fda0d911623a157f7d38978e] 2025-07-06 19:22:14.788887 | orchestrator | 19:22:14.788 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-07-06 19:22:14.789537 | orchestrator | 19:22:14.789 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=295da426ed02727ced3d20e8bcee9674eac3f567] 2025-07-06 19:22:15.489113 | orchestrator | 19:22:15.488 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 0s [id=35645e13-a495-4547-89d6-984d5e7e3210] 2025-07-06 19:22:18.452689 | orchestrator | 19:22:18.452 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-07-06 19:22:18.452802 | orchestrator | 19:22:18.452 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-07-06 19:22:18.470950 | orchestrator | 19:22:18.470 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-07-06 19:22:18.471034 | orchestrator | 19:22:18.470 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-07-06 19:22:18.471200 | orchestrator | 19:22:18.470 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-07-06 19:22:18.478115 | orchestrator | 19:22:18.477 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-07-06 19:22:28.455781 | orchestrator | 19:22:28.455 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-07-06 19:22:28.455956 | orchestrator | 19:22:28.455 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... 
[20s elapsed] 2025-07-06 19:22:28.471122 | orchestrator | 19:22:28.470 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-07-06 19:22:28.471252 | orchestrator | 19:22:28.471 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-07-06 19:22:28.471274 | orchestrator | 19:22:28.471 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-07-06 19:22:28.478620 | orchestrator | 19:22:28.478 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-07-06 19:22:29.004598 | orchestrator | 19:22:29.004 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 21s [id=fe6823d2-b8c1-48c5-b0c4-3542e97731c0] 2025-07-06 19:22:38.459468 | orchestrator | 19:22:38.459 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2025-07-06 19:22:38.471567 | orchestrator | 19:22:38.471 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2025-07-06 19:22:38.471657 | orchestrator | 19:22:38.471 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2025-07-06 19:22:38.471684 | orchestrator | 19:22:38.471 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2025-07-06 19:22:38.478630 | orchestrator | 19:22:38.478 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... 
[30s elapsed] 2025-07-06 19:22:39.292339 | orchestrator | 19:22:39.292 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=1350aec9-56c3-41cb-a91d-f0fc9848c10a] 2025-07-06 19:22:39.391827 | orchestrator | 19:22:39.391 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=ad5bcc77-29a9-41ab-89fb-dfb14e2bc50e] 2025-07-06 19:22:48.474846 | orchestrator | 19:22:48.474 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed] 2025-07-06 19:22:48.474971 | orchestrator | 19:22:48.474 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed] 2025-07-06 19:22:48.479018 | orchestrator | 19:22:48.478 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [40s elapsed] 2025-07-06 19:22:49.292315 | orchestrator | 19:22:49.291 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 41s [id=26bbcbc4-1b60-41dd-8f65-c0c8392dbd1d] 2025-07-06 19:22:49.401833 | orchestrator | 19:22:49.401 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 41s [id=a26851d9-9143-4a2f-a926-9da7928bfe91] 2025-07-06 19:22:49.739369 | orchestrator | 19:22:49.738 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 42s [id=71dc50c3-26f5-4780-b106-c9370307108e] 2025-07-06 19:22:49.779905 | orchestrator | 19:22:49.779 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-07-06 19:22:49.782103 | orchestrator | 19:22:49.781 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-07-06 19:22:49.786713 | orchestrator | 19:22:49.786 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-07-06 19:22:49.786762 | orchestrator | 19:22:49.786 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 
2025-07-06 19:22:49.786799 | orchestrator | 19:22:49.786 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-07-06 19:22:49.790651 | orchestrator | 19:22:49.790 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=3512379086170909969] 2025-07-06 19:22:49.791304 | orchestrator | 19:22:49.791 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-07-06 19:22:49.794928 | orchestrator | 19:22:49.792 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-07-06 19:22:49.794981 | orchestrator | 19:22:49.793 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-07-06 19:22:49.798370 | orchestrator | 19:22:49.798 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-07-06 19:22:49.806090 | orchestrator | 19:22:49.805 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-07-06 19:22:49.822109 | orchestrator | 19:22:49.818 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 
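The Terraform progress records above follow a fixed shape (`<resource>: Creation complete after <N>s [id=<uuid>]`). When post-processing such console logs, per-resource timings and IDs can be recovered with a small parser. A minimal sketch (the regex and helper name are illustrative assumptions, not part of the testbed tooling):

```python
import re

# Matches Terraform apply completion records as they appear in this console log,
# e.g. "openstack_compute_instance_v2.node_server[4]: Creation complete after 21s [id=fe68...]"
COMPLETE_RE = re.compile(
    r"(?P<resource>[\w.]+(?:\[\d+\])?): Creation complete after (?P<secs>\d+)s \[id=(?P<id>[^\]]+)\]"
)

def parse_completions(lines):
    """Collect (resource, seconds, id) tuples from Terraform log lines."""
    out = []
    for line in lines:
        m = COMPLETE_RE.search(line)
        if m:
            out.append((m.group("resource"), int(m.group("secs")), m.group("id")))
    return out

sample = [
    "terraform: openstack_compute_instance_v2.node_server[4]: "
    "Creation complete after 21s [id=fe6823d2-b8c1-48c5-b0c4-3542e97731c0]",
    "terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]",
]
print(parse_completions(sample))
```

"Still creating..." heartbeat lines are intentionally ignored; only completed resources are reported.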
2025-07-06 19:22:55.100910 | orchestrator | 19:22:55.100 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 5s [id=ad5bcc77-29a9-41ab-89fb-dfb14e2bc50e/0c9a7d91-c8fc-48f8-acad-853231e255dd] 2025-07-06 19:22:55.101662 | orchestrator | 19:22:55.101 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 5s [id=fe6823d2-b8c1-48c5-b0c4-3542e97731c0/28d32b1f-54bf-4890-9371-a2140c9d3e48] 2025-07-06 19:22:55.115652 | orchestrator | 19:22:55.115 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 5s [id=26bbcbc4-1b60-41dd-8f65-c0c8392dbd1d/29aeef2c-15f7-4912-be6e-922934b043d5] 2025-07-06 19:22:55.135076 | orchestrator | 19:22:55.134 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 5s [id=ad5bcc77-29a9-41ab-89fb-dfb14e2bc50e/fd99b70f-8aa3-4e15-8e66-07a34fe10111] 2025-07-06 19:22:55.136587 | orchestrator | 19:22:55.136 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 5s [id=26bbcbc4-1b60-41dd-8f65-c0c8392dbd1d/1751cfdb-b4ca-4b06-9fa0-b986eec2737a] 2025-07-06 19:22:55.145064 | orchestrator | 19:22:55.144 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 5s [id=fe6823d2-b8c1-48c5-b0c4-3542e97731c0/e42fce45-67a3-477c-881f-6db38785a929] 2025-07-06 19:22:55.203848 | orchestrator | 19:22:55.203 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 5s [id=26bbcbc4-1b60-41dd-8f65-c0c8392dbd1d/4a0eaf3f-1395-4073-9878-c6e703eff332] 2025-07-06 19:22:55.209516 | orchestrator | 19:22:55.209 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 5s [id=fe6823d2-b8c1-48c5-b0c4-3542e97731c0/c523d18d-f688-4547-bb4c-d63e44be8719] 2025-07-06 19:22:55.236145 | orchestrator | 
19:22:55.235 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 5s [id=ad5bcc77-29a9-41ab-89fb-dfb14e2bc50e/3c29cd91-58e9-42ce-8653-990321e9d76b] 2025-07-06 19:22:59.823791 | orchestrator | 19:22:59.823 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-07-06 19:23:09.823768 | orchestrator | 19:23:09.823 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-07-06 19:23:10.561076 | orchestrator | 19:23:10.560 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=7bacfefc-8472-44b3-aff6-1b69b3b3eb0f] 2025-07-06 19:23:10.583383 | orchestrator | 19:23:10.583 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 2025-07-06 19:23:10.583482 | orchestrator | 19:23:10.583 STDOUT terraform: Outputs: 2025-07-06 19:23:10.583495 | orchestrator | 19:23:10.583 STDOUT terraform: manager_address = 2025-07-06 19:23:10.583512 | orchestrator | 19:23:10.583 STDOUT terraform: private_key = 2025-07-06 19:23:10.684486 | orchestrator | ok: Runtime: 0:01:44.076188 2025-07-06 19:23:10.706636 | 2025-07-06 19:23:10.706762 | TASK [Create infrastructure (stable)] 2025-07-06 19:23:11.240356 | orchestrator | skipping: Conditional result was False 2025-07-06 19:23:11.256344 | 2025-07-06 19:23:11.256498 | TASK [Fetch manager address] 2025-07-06 19:23:11.693083 | orchestrator | ok 2025-07-06 19:23:11.703306 | 2025-07-06 19:23:11.703430 | TASK [Set manager_host address] 2025-07-06 19:23:11.784722 | orchestrator | ok 2025-07-06 19:23:11.796314 | 2025-07-06 19:23:11.796464 | LOOP [Update ansible collections] 2025-07-06 19:23:12.647691 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-07-06 19:23:12.648054 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-07-06 19:23:12.648112 | orchestrator | 
Starting galaxy collection install process 2025-07-06 19:23:12.648153 | orchestrator | Process install dependency map 2025-07-06 19:23:12.648190 | orchestrator | Starting collection install process 2025-07-06 19:23:12.648238 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons' 2025-07-06 19:23:12.648280 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons 2025-07-06 19:23:12.648321 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-07-06 19:23:12.648397 | orchestrator | ok: Item: commons Runtime: 0:00:00.521916 2025-07-06 19:23:13.493266 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-07-06 19:23:13.493431 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-07-06 19:23:13.493486 | orchestrator | Starting galaxy collection install process 2025-07-06 19:23:13.493527 | orchestrator | Process install dependency map 2025-07-06 19:23:13.493566 | orchestrator | Starting collection install process 2025-07-06 19:23:13.493652 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services' 2025-07-06 19:23:13.493689 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services 2025-07-06 19:23:13.493723 | orchestrator | osism.services:999.0.0 was installed successfully 2025-07-06 19:23:13.493779 | orchestrator | ok: Item: services Runtime: 0:00:00.587585 2025-07-06 19:23:13.516031 | 2025-07-06 19:23:13.516194 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-07-06 19:23:24.076234 | orchestrator | ok 2025-07-06 19:23:24.088214 | 2025-07-06 19:23:24.088344 | TASK [Wait a little longer for the manager so that 
everything is ready] 2025-07-06 19:24:24.136055 | orchestrator | ok 2025-07-06 19:24:24.147963 | 2025-07-06 19:24:24.148083 | TASK [Fetch manager ssh hostkey] 2025-07-06 19:24:25.722804 | orchestrator | Output suppressed because no_log was given 2025-07-06 19:24:25.739693 | 2025-07-06 19:24:25.739906 | TASK [Get ssh keypair from terraform environment] 2025-07-06 19:24:26.285805 | orchestrator | ok: Runtime: 0:00:00.011667 2025-07-06 19:24:26.303609 | 2025-07-06 19:24:26.303762 | TASK [Point out that the following task takes some time and does not give any output] 2025-07-06 19:24:26.346246 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-07-06 19:24:26.358476 | 2025-07-06 19:24:26.358635 | TASK [Run manager part 0] 2025-07-06 19:24:27.347563 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-07-06 19:24:27.394157 | orchestrator | 2025-07-06 19:24:27.394205 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-07-06 19:24:27.394213 | orchestrator | 2025-07-06 19:24:27.394226 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-07-06 19:24:29.660112 | orchestrator | ok: [testbed-manager] 2025-07-06 19:24:29.660167 | orchestrator | 2025-07-06 19:24:29.660189 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-07-06 19:24:29.660200 | orchestrator | 2025-07-06 19:24:29.660210 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-06 19:24:31.533982 | orchestrator | ok: [testbed-manager] 2025-07-06 19:24:31.534047 | orchestrator | 2025-07-06 19:24:31.534054 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-07-06 19:24:32.217744 | 
orchestrator | ok: [testbed-manager] 2025-07-06 19:24:32.217837 | orchestrator | 2025-07-06 19:24:32.217847 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-07-06 19:24:32.274938 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:24:32.274997 | orchestrator | 2025-07-06 19:24:32.275010 | orchestrator | TASK [Update package cache] **************************************************** 2025-07-06 19:24:32.303844 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:24:32.303898 | orchestrator | 2025-07-06 19:24:32.303908 | orchestrator | TASK [Install required packages] *********************************************** 2025-07-06 19:24:32.340429 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:24:32.340484 | orchestrator | 2025-07-06 19:24:32.340492 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-07-06 19:24:32.371420 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:24:32.371468 | orchestrator | 2025-07-06 19:24:32.371475 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-07-06 19:24:32.402506 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:24:32.402562 | orchestrator | 2025-07-06 19:24:32.402571 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-07-06 19:24:32.433256 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:24:32.433301 | orchestrator | 2025-07-06 19:24:32.433309 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-07-06 19:24:32.458948 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:24:32.459012 | orchestrator | 2025-07-06 19:24:32.459025 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-07-06 19:24:33.295743 | orchestrator | changed: [testbed-manager] 2025-07-06 19:24:33.295802 | 
orchestrator | 2025-07-06 19:24:33.295811 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-07-06 19:27:41.202968 | orchestrator | changed: [testbed-manager] 2025-07-06 19:27:41.203027 | orchestrator | 2025-07-06 19:27:41.203035 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-07-06 19:28:57.568368 | orchestrator | changed: [testbed-manager] 2025-07-06 19:28:57.568507 | orchestrator | 2025-07-06 19:28:57.568525 | orchestrator | TASK [Install required packages] *********************************************** 2025-07-06 19:29:17.305730 | orchestrator | changed: [testbed-manager] 2025-07-06 19:29:17.305881 | orchestrator | 2025-07-06 19:29:17.305911 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-07-06 19:29:25.991905 | orchestrator | changed: [testbed-manager] 2025-07-06 19:29:25.992009 | orchestrator | 2025-07-06 19:29:25.992024 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-07-06 19:29:26.039221 | orchestrator | ok: [testbed-manager] 2025-07-06 19:29:26.039286 | orchestrator | 2025-07-06 19:29:26.039294 | orchestrator | TASK [Get current user] ******************************************************** 2025-07-06 19:29:26.833640 | orchestrator | ok: [testbed-manager] 2025-07-06 19:29:26.833703 | orchestrator | 2025-07-06 19:29:26.833714 | orchestrator | TASK [Create venv directory] *************************************************** 2025-07-06 19:29:27.609445 | orchestrator | changed: [testbed-manager] 2025-07-06 19:29:27.609543 | orchestrator | 2025-07-06 19:29:27.609559 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-07-06 19:29:33.936560 | orchestrator | changed: [testbed-manager] 2025-07-06 19:29:33.936663 | orchestrator | 2025-07-06 19:29:33.936706 | orchestrator | TASK [Install ansible-core in 
venv] ******************************************** 2025-07-06 19:29:39.886152 | orchestrator | changed: [testbed-manager] 2025-07-06 19:29:39.886418 | orchestrator | 2025-07-06 19:29:39.886442 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-07-06 19:29:42.567184 | orchestrator | changed: [testbed-manager] 2025-07-06 19:29:42.567856 | orchestrator | 2025-07-06 19:29:42.567883 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-07-06 19:29:44.356102 | orchestrator | changed: [testbed-manager] 2025-07-06 19:29:44.356191 | orchestrator | 2025-07-06 19:29:44.356206 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-07-06 19:29:45.508876 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-07-06 19:29:45.509000 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-07-06 19:29:45.509027 | orchestrator | 2025-07-06 19:29:45.509041 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-07-06 19:29:45.553056 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-07-06 19:29:45.553158 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-07-06 19:29:45.553185 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-07-06 19:29:45.553207 | orchestrator | deprecation_warnings=False in ansible.cfg. 
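The tasks above install Python packages with minimum-version pins (`requests>=2.32.2`, `docker>=7.1.0`); pip enforces such specifiers with its full PEP 440 resolver. As a rough illustration of what a minimum-version constraint means, here is a naive sketch that only handles plain dotted release versions (the helper name is made up, and pre-release/epoch handling is deliberately omitted):

```python
def at_least(version: str, minimum: str) -> bool:
    """Naive check that a dotted release version meets a minimum, e.g. 7.1.0 >= 2.32.2.

    Real tooling should use packaging.version; this tuple compare ignores
    pre-releases, epochs, and local version segments.
    """
    parse = lambda v: tuple(int(p) for p in v.split("."))
    return parse(version) >= parse(minimum)

print(at_least("7.1.0", "7.1.0"), at_least("2.31.0", "2.32.2"))
```

Note that comparing version strings lexically would get `2.31.0` vs `2.32.2` wrong in other cases (e.g. `"10" < "9"` as strings), which is why the components are compared numerically.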
2025-07-06 19:29:48.760706 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-07-06 19:29:48.760799 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-07-06 19:29:48.760814 | orchestrator | 2025-07-06 19:29:48.760827 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-07-06 19:29:49.332056 | orchestrator | changed: [testbed-manager] 2025-07-06 19:29:49.332148 | orchestrator | 2025-07-06 19:29:49.332163 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-07-06 19:32:25.891230 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-07-06 19:32:25.891332 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-07-06 19:32:25.891350 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-07-06 19:32:25.891363 | orchestrator | 2025-07-06 19:32:25.891375 | orchestrator | TASK [Install local collections] *********************************************** 2025-07-06 19:32:28.217919 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-07-06 19:32:28.217996 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-07-06 19:32:28.218010 | orchestrator | 2025-07-06 19:32:28.218081 | orchestrator | PLAY [Create operator user] **************************************************** 2025-07-06 19:32:28.218095 | orchestrator | 2025-07-06 19:32:28.218106 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-06 19:32:29.632205 | orchestrator | ok: [testbed-manager] 2025-07-06 19:32:29.632314 | orchestrator | 2025-07-06 19:32:29.632340 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-07-06 19:32:29.682915 | orchestrator | ok: [testbed-manager] 2025-07-06 19:32:29.682997 | 
orchestrator | 2025-07-06 19:32:29.683011 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-07-06 19:32:29.760509 | orchestrator | ok: [testbed-manager] 2025-07-06 19:32:29.760604 | orchestrator | 2025-07-06 19:32:29.760621 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-07-06 19:32:30.555040 | orchestrator | changed: [testbed-manager] 2025-07-06 19:32:30.555125 | orchestrator | 2025-07-06 19:32:30.555145 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-07-06 19:32:31.331804 | orchestrator | changed: [testbed-manager] 2025-07-06 19:32:31.331895 | orchestrator | 2025-07-06 19:32:31.331911 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-07-06 19:32:32.724229 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-07-06 19:32:32.724319 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-07-06 19:32:32.724335 | orchestrator | 2025-07-06 19:32:32.724362 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-07-06 19:32:34.116745 | orchestrator | changed: [testbed-manager] 2025-07-06 19:32:34.116867 | orchestrator | 2025-07-06 19:32:34.116884 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-07-06 19:32:35.881561 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-07-06 19:32:35.881656 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-07-06 19:32:35.881671 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-07-06 19:32:35.881683 | orchestrator | 2025-07-06 19:32:35.881696 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-07-06 19:32:35.943129 | orchestrator | skipping: 
[testbed-manager] 2025-07-06 19:32:35.943216 | orchestrator | 2025-07-06 19:32:35.943232 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-07-06 19:32:36.504561 | orchestrator | changed: [testbed-manager] 2025-07-06 19:32:36.504658 | orchestrator | 2025-07-06 19:32:36.504677 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-07-06 19:32:36.578651 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:32:36.578715 | orchestrator | 2025-07-06 19:32:36.578721 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-07-06 19:32:37.438098 | orchestrator | changed: [testbed-manager] => (item=None) 2025-07-06 19:32:37.438698 | orchestrator | changed: [testbed-manager] 2025-07-06 19:32:37.438721 | orchestrator | 2025-07-06 19:32:37.438733 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-07-06 19:32:37.471519 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:32:37.471577 | orchestrator | 2025-07-06 19:32:37.471585 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-07-06 19:32:37.499624 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:32:37.499690 | orchestrator | 2025-07-06 19:32:37.499698 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-07-06 19:32:37.530101 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:32:37.530157 | orchestrator | 2025-07-06 19:32:37.530164 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-07-06 19:32:37.582711 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:32:37.582837 | orchestrator | 2025-07-06 19:32:37.582866 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-07-06 19:32:38.317571 | orchestrator 
| ok: [testbed-manager] 2025-07-06 19:32:38.317659 | orchestrator | 2025-07-06 19:32:38.317675 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-07-06 19:32:38.317689 | orchestrator | 2025-07-06 19:32:38.317700 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-06 19:32:39.705046 | orchestrator | ok: [testbed-manager] 2025-07-06 19:32:39.705115 | orchestrator | 2025-07-06 19:32:39.705131 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-07-06 19:32:40.671061 | orchestrator | changed: [testbed-manager] 2025-07-06 19:32:40.671130 | orchestrator | 2025-07-06 19:32:40.671144 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 19:32:40.671157 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-07-06 19:32:40.671169 | orchestrator | 2025-07-06 19:32:41.203007 | orchestrator | ok: Runtime: 0:08:14.096024 2025-07-06 19:32:41.224757 | 2025-07-06 19:32:41.224932 | TASK [Point out that logging in on the manager is now possible] 2025-07-06 19:32:41.271390 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2025-07-06 19:32:41.279462 | 2025-07-06 19:32:41.279574 | TASK [Point out that the following task takes some time and does not give any output] 2025-07-06 19:32:41.324265 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
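The PLAY RECAP line above is Ansible's standard per-host summary; a deployment gate typically checks that `failed` and `unreachable` are zero. A minimal sketch of parsing such a line (the helper name is illustrative, not part of the testbed scripts):

```python
import re

def parse_recap(line):
    """Turn an Ansible PLAY RECAP host line into (host, {counter: value})."""
    host, _, counters = line.partition(":")
    stats = {k: int(v) for k, v in re.findall(r"(\w+)=(\d+)", counters)}
    return host.strip(), stats

host, stats = parse_recap(
    "testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0"
)
# A gate would then verify stats["failed"] == 0 and stats["unreachable"] == 0.
print(host, stats["failed"])
```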
2025-07-06 19:32:41.333794 | 2025-07-06 19:32:41.333920 | TASK [Run manager part 1 + 2] 2025-07-06 19:32:42.181246 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-07-06 19:32:42.237393 | orchestrator | 2025-07-06 19:32:42.237468 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-07-06 19:32:42.237476 | orchestrator | 2025-07-06 19:32:42.237489 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-06 19:32:45.251380 | orchestrator | ok: [testbed-manager] 2025-07-06 19:32:45.251447 | orchestrator | 2025-07-06 19:32:45.251468 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-07-06 19:32:45.282649 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:32:45.282714 | orchestrator | 2025-07-06 19:32:45.282724 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-07-06 19:32:45.326123 | orchestrator | ok: [testbed-manager] 2025-07-06 19:32:45.326195 | orchestrator | 2025-07-06 19:32:45.326211 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-07-06 19:32:45.365362 | orchestrator | ok: [testbed-manager] 2025-07-06 19:32:45.365415 | orchestrator | 2025-07-06 19:32:45.365441 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-07-06 19:32:45.430781 | orchestrator | ok: [testbed-manager] 2025-07-06 19:32:45.430839 | orchestrator | 2025-07-06 19:32:45.430850 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-07-06 19:32:45.492639 | orchestrator | ok: [testbed-manager] 2025-07-06 19:32:45.492699 | orchestrator | 2025-07-06 19:32:45.492710 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-07-06 19:32:45.539048 | 
orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-07-06 19:32:45.539101 | orchestrator | 2025-07-06 19:32:45.539108 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-07-06 19:32:46.290986 | orchestrator | ok: [testbed-manager] 2025-07-06 19:32:46.291045 | orchestrator | 2025-07-06 19:32:46.291055 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-07-06 19:32:46.346492 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:32:46.346552 | orchestrator | 2025-07-06 19:32:46.346562 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-07-06 19:32:47.735977 | orchestrator | changed: [testbed-manager] 2025-07-06 19:32:47.736041 | orchestrator | 2025-07-06 19:32:47.736052 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-07-06 19:32:48.323598 | orchestrator | ok: [testbed-manager] 2025-07-06 19:32:48.323660 | orchestrator | 2025-07-06 19:32:48.323670 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-07-06 19:32:49.469637 | orchestrator | changed: [testbed-manager] 2025-07-06 19:32:49.469714 | orchestrator | 2025-07-06 19:32:49.469734 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-07-06 19:33:02.794462 | orchestrator | changed: [testbed-manager] 2025-07-06 19:33:02.794593 | orchestrator | 2025-07-06 19:33:02.794611 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-07-06 19:33:03.482543 | orchestrator | ok: [testbed-manager] 2025-07-06 19:33:03.482584 | orchestrator | 2025-07-06 19:33:03.482594 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2025-07-06 19:33:03.537739 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:33:03.537780 | orchestrator | 2025-07-06 19:33:03.537789 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-07-06 19:33:04.502375 | orchestrator | changed: [testbed-manager] 2025-07-06 19:33:04.502466 | orchestrator | 2025-07-06 19:33:04.502505 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-07-06 19:33:05.488723 | orchestrator | changed: [testbed-manager] 2025-07-06 19:33:05.488827 | orchestrator | 2025-07-06 19:33:05.488844 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-07-06 19:33:06.064440 | orchestrator | changed: [testbed-manager] 2025-07-06 19:33:06.064571 | orchestrator | 2025-07-06 19:33:06.064588 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-07-06 19:33:06.098556 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-07-06 19:33:06.098665 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-07-06 19:33:06.098681 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-07-06 19:33:06.098694 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-07-06 19:33:08.068940 | orchestrator | changed: [testbed-manager] 2025-07-06 19:33:08.069013 | orchestrator | 2025-07-06 19:33:08.069023 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-07-06 19:33:16.983562 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-07-06 19:33:16.983836 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-07-06 19:33:16.983861 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-07-06 19:33:16.983871 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-07-06 19:33:16.983888 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-07-06 19:33:16.983898 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-07-06 19:33:16.983907 | orchestrator | 2025-07-06 19:33:16.983917 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-07-06 19:33:18.066927 | orchestrator | changed: [testbed-manager] 2025-07-06 19:33:18.066974 | orchestrator | 2025-07-06 19:33:18.066984 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-07-06 19:33:18.110672 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:33:18.110713 | orchestrator | 2025-07-06 19:33:18.110721 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-07-06 19:33:21.306579 | orchestrator | changed: [testbed-manager] 2025-07-06 19:33:21.306679 | orchestrator | 2025-07-06 19:33:21.306700 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-07-06 19:33:21.347569 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:33:21.347659 | orchestrator | 2025-07-06 19:33:21.347676 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-07-06 19:34:55.714919 | orchestrator | changed: [testbed-manager] 2025-07-06 
19:34:55.715099 | orchestrator |
2025-07-06 19:34:55.715123 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-07-06 19:34:56.821243 | orchestrator | ok: [testbed-manager]
2025-07-06 19:34:56.821280 | orchestrator |
2025-07-06 19:34:56.821287 | orchestrator | PLAY RECAP *********************************************************************
2025-07-06 19:34:56.821294 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
2025-07-06 19:34:56.821299 | orchestrator |
2025-07-06 19:34:56.971280 | orchestrator | ok: Runtime: 0:02:15.252123
2025-07-06 19:34:56.990107 |
2025-07-06 19:34:56.990292 | TASK [Reboot manager]
2025-07-06 19:34:58.535010 | orchestrator | ok: Runtime: 0:00:01.024842
2025-07-06 19:34:58.552041 |
2025-07-06 19:34:58.552235 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2025-07-06 19:35:15.000160 | orchestrator | ok
2025-07-06 19:35:15.017987 |
2025-07-06 19:35:15.018154 | TASK [Wait a little longer for the manager so that everything is ready]
2025-07-06 19:36:15.061106 | orchestrator | ok
2025-07-06 19:36:15.071555 |
2025-07-06 19:36:15.071705 | TASK [Deploy manager + bootstrap nodes]
2025-07-06 19:36:17.604797 | orchestrator |
2025-07-06 19:36:17.605020 | orchestrator | # DEPLOY MANAGER
2025-07-06 19:36:17.605046 | orchestrator |
2025-07-06 19:36:17.605061 | orchestrator | + set -e
2025-07-06 19:36:17.605074 | orchestrator | + echo
2025-07-06 19:36:17.605088 | orchestrator | + echo '# DEPLOY MANAGER'
2025-07-06 19:36:17.605105 | orchestrator | + echo
2025-07-06 19:36:17.605157 | orchestrator | + cat /opt/manager-vars.sh
2025-07-06 19:36:17.607936 | orchestrator | export NUMBER_OF_NODES=6
2025-07-06 19:36:17.607973 | orchestrator |
2025-07-06 19:36:17.607985 | orchestrator | export CEPH_VERSION=reef
2025-07-06 19:36:17.607998 | orchestrator | export CONFIGURATION_VERSION=main
2025-07-06 19:36:17.608011 | orchestrator | export MANAGER_VERSION=latest
2025-07-06 19:36:17.608033 | orchestrator | export OPENSTACK_VERSION=2024.2
2025-07-06 19:36:17.608044 | orchestrator |
2025-07-06 19:36:17.608062 | orchestrator | export ARA=false
2025-07-06 19:36:17.608073 | orchestrator | export DEPLOY_MODE=manager
2025-07-06 19:36:17.608091 | orchestrator | export TEMPEST=false
2025-07-06 19:36:17.608102 | orchestrator | export IS_ZUUL=true
2025-07-06 19:36:17.608113 | orchestrator |
2025-07-06 19:36:17.608130 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.163
2025-07-06 19:36:17.608142 | orchestrator | export EXTERNAL_API=false
2025-07-06 19:36:17.608153 | orchestrator |
2025-07-06 19:36:17.608163 | orchestrator | export IMAGE_USER=ubuntu
2025-07-06 19:36:17.608177 | orchestrator | export IMAGE_NODE_USER=ubuntu
2025-07-06 19:36:17.608188 | orchestrator |
2025-07-06 19:36:17.608199 | orchestrator | export CEPH_STACK=ceph-ansible
2025-07-06 19:36:17.608374 | orchestrator |
2025-07-06 19:36:17.608393 | orchestrator | + echo
2025-07-06 19:36:17.608407 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-07-06 19:36:17.608996 | orchestrator | ++ export INTERACTIVE=false
2025-07-06 19:36:17.609019 | orchestrator | ++ INTERACTIVE=false
2025-07-06 19:36:17.609052 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-07-06 19:36:17.609064 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-07-06 19:36:17.609150 | orchestrator | + source /opt/manager-vars.sh
2025-07-06 19:36:17.609189 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-07-06 19:36:17.609212 | orchestrator | ++ NUMBER_OF_NODES=6
2025-07-06 19:36:17.609247 | orchestrator | ++ export CEPH_VERSION=reef
2025-07-06 19:36:17.609259 | orchestrator | ++ CEPH_VERSION=reef
2025-07-06 19:36:17.609332 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-07-06 19:36:17.609356 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-07-06 19:36:17.609395 | orchestrator | ++ export MANAGER_VERSION=latest
2025-07-06 19:36:17.609407 | orchestrator | ++ MANAGER_VERSION=latest
2025-07-06 19:36:17.609418 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-07-06 19:36:17.609439 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-07-06 19:36:17.609462 | orchestrator | ++ export ARA=false
2025-07-06 19:36:17.609474 | orchestrator | ++ ARA=false
2025-07-06 19:36:17.609485 | orchestrator | ++ export DEPLOY_MODE=manager
2025-07-06 19:36:17.609496 | orchestrator | ++ DEPLOY_MODE=manager
2025-07-06 19:36:17.609506 | orchestrator | ++ export TEMPEST=false
2025-07-06 19:36:17.609517 | orchestrator | ++ TEMPEST=false
2025-07-06 19:36:17.609528 | orchestrator | ++ export IS_ZUUL=true
2025-07-06 19:36:17.609539 | orchestrator | ++ IS_ZUUL=true
2025-07-06 19:36:17.609550 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.163
2025-07-06 19:36:17.609560 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.163
2025-07-06 19:36:17.609571 | orchestrator | ++ export EXTERNAL_API=false
2025-07-06 19:36:17.609582 | orchestrator | ++ EXTERNAL_API=false
2025-07-06 19:36:17.609592 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-07-06 19:36:17.609603 | orchestrator | ++ IMAGE_USER=ubuntu
2025-07-06 19:36:17.609614 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-07-06 19:36:17.609624 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-07-06 19:36:17.609636 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-07-06 19:36:17.609646 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-07-06 19:36:17.609657 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver
2025-07-06 19:36:17.667959 | orchestrator | + docker version
2025-07-06 19:36:17.946161 | orchestrator | Client: Docker Engine - Community
2025-07-06 19:36:17.946303 | orchestrator |  Version:           27.5.1
2025-07-06 19:36:17.946333 | orchestrator |  API version:       1.47
2025-07-06 19:36:17.946352 | orchestrator |  Go version:        go1.22.11
2025-07-06 19:36:17.946370 | orchestrator |  Git commit:        9f9e405
2025-07-06 19:36:17.946389 | orchestrator |  Built:             Wed Jan 22 13:41:48 2025
2025-07-06 19:36:17.946408 | orchestrator |  OS/Arch:           linux/amd64
2025-07-06 19:36:17.946427 | orchestrator |  Context:           default
2025-07-06 19:36:17.946446 | orchestrator |
2025-07-06 19:36:17.946461 | orchestrator | Server: Docker Engine - Community
2025-07-06 19:36:17.946473 | orchestrator |  Engine:
2025-07-06 19:36:17.946485 | orchestrator |   Version:          27.5.1
2025-07-06 19:36:17.946495 | orchestrator |   API version:      1.47 (minimum version 1.24)
2025-07-06 19:36:17.946539 | orchestrator |   Go version:       go1.22.11
2025-07-06 19:36:17.946559 | orchestrator |   Git commit:       4c9b3b0
2025-07-06 19:36:17.946576 | orchestrator |   Built:            Wed Jan 22 13:41:48 2025
2025-07-06 19:36:17.946594 | orchestrator |   OS/Arch:          linux/amd64
2025-07-06 19:36:17.946611 | orchestrator |   Experimental:     false
2025-07-06 19:36:17.946629 | orchestrator |  containerd:
2025-07-06 19:36:17.946648 | orchestrator |   Version:          1.7.27
2025-07-06 19:36:17.946667 | orchestrator |   GitCommit:        05044ec0a9a75232cad458027ca83437aae3f4da
2025-07-06 19:36:17.946687 | orchestrator |  runc:
2025-07-06 19:36:17.946705 | orchestrator |   Version:          1.2.5
2025-07-06 19:36:17.946721 | orchestrator |   GitCommit:        v1.2.5-0-g59923ef
2025-07-06 19:36:17.946732 | orchestrator |  docker-init:
2025-07-06 19:36:17.946743 | orchestrator |   Version:          0.19.0
2025-07-06 19:36:17.946755 | orchestrator |   GitCommit:        de40ad0
2025-07-06 19:36:17.948822 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2025-07-06 19:36:17.957960 | orchestrator | + set -e
2025-07-06 19:36:17.958008 | orchestrator | + source /opt/manager-vars.sh
2025-07-06 19:36:17.958060 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-07-06 19:36:17.958073 | orchestrator | ++ NUMBER_OF_NODES=6
2025-07-06 19:36:17.958084 | orchestrator | ++ export CEPH_VERSION=reef
2025-07-06 19:36:17.958094 | orchestrator | ++ CEPH_VERSION=reef
2025-07-06 19:36:17.958105 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-07-06 19:36:17.958117 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-07-06 19:36:17.958127 | orchestrator | ++ export MANAGER_VERSION=latest
2025-07-06 19:36:17.958138 | orchestrator | ++ MANAGER_VERSION=latest
2025-07-06 19:36:17.958149 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-07-06 19:36:17.958160 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-07-06 19:36:17.958171 | orchestrator | ++ export ARA=false
2025-07-06 19:36:17.958182 | orchestrator | ++ ARA=false
2025-07-06 19:36:17.958192 | orchestrator | ++ export DEPLOY_MODE=manager
2025-07-06 19:36:17.958203 | orchestrator | ++ DEPLOY_MODE=manager
2025-07-06 19:36:17.958214 | orchestrator | ++ export TEMPEST=false
2025-07-06 19:36:17.958225 | orchestrator | ++ TEMPEST=false
2025-07-06 19:36:17.958236 | orchestrator | ++ export IS_ZUUL=true
2025-07-06 19:36:17.958246 | orchestrator | ++ IS_ZUUL=true
2025-07-06 19:36:17.958257 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.163
2025-07-06 19:36:17.958269 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.163
2025-07-06 19:36:17.958279 | orchestrator | ++ export EXTERNAL_API=false
2025-07-06 19:36:17.958290 | orchestrator | ++ EXTERNAL_API=false
2025-07-06 19:36:17.958301 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-07-06 19:36:17.958311 | orchestrator | ++ IMAGE_USER=ubuntu
2025-07-06 19:36:17.958322 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-07-06 19:36:17.958333 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-07-06 19:36:17.958344 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-07-06 19:36:17.958355 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-07-06 19:36:17.958365 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-07-06 19:36:17.958376 | orchestrator | ++ export INTERACTIVE=false
2025-07-06 19:36:17.958387 | orchestrator | ++ INTERACTIVE=false
2025-07-06 19:36:17.958397 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-07-06 19:36:17.958414 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-07-06 19:36:17.958426 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-07-06 19:36:17.958436 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-07-06 19:36:17.958447 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef
2025-07-06 19:36:17.965310 | orchestrator | + set -e
2025-07-06 19:36:17.965344 | orchestrator | + VERSION=reef
2025-07-06 19:36:17.966548 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml
2025-07-06 19:36:17.971889 | orchestrator | + [[ -n ceph_version: reef ]]
2025-07-06 19:36:17.971921 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml
2025-07-06 19:36:17.978281 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2
2025-07-06 19:36:17.985075 | orchestrator | + set -e
2025-07-06 19:36:17.985099 | orchestrator | + VERSION=2024.2
2025-07-06 19:36:17.986175 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml
2025-07-06 19:36:17.989286 | orchestrator | + [[ -n openstack_version: 2024.2 ]]
2025-07-06 19:36:17.989309 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml
2025-07-06 19:36:17.994692 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2025-07-06 19:36:17.995752 | orchestrator | ++ semver latest 7.0.0
2025-07-06 19:36:18.054252 | orchestrator | + [[ -1 -ge 0 ]]
2025-07-06 19:36:18.054355 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-07-06 19:36:18.054370 | orchestrator | + echo 'enable_osism_kubernetes: true'
2025-07-06 19:36:18.054384 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2025-07-06 19:36:18.141170 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-07-06 19:36:18.142188 | orchestrator | + source /opt/venv/bin/activate
2025-07-06 19:36:18.143459 | orchestrator | ++ deactivate nondestructive
2025-07-06 19:36:18.143507 | orchestrator | ++ '[' -n '' ']'
2025-07-06 19:36:18.143528 | orchestrator | ++ '[' -n '' ']'
2025-07-06 19:36:18.143545 | orchestrator | ++ hash -r
2025-07-06 19:36:18.143557 | orchestrator | ++ '[' -n '' ']'
2025-07-06 19:36:18.143572 | orchestrator | ++ unset VIRTUAL_ENV
2025-07-06 19:36:18.143587 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-07-06 19:36:18.143602 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-07-06 19:36:18.143826 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-07-06 19:36:18.143888 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-07-06 19:36:18.143917 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-07-06 19:36:18.143936 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-07-06 19:36:18.143966 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-07-06 19:36:18.143990 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-07-06 19:36:18.144047 | orchestrator | ++ export PATH
2025-07-06 19:36:18.144199 | orchestrator | ++ '[' -n '' ']'
2025-07-06 19:36:18.144235 | orchestrator | ++ '[' -z '' ']'
2025-07-06 19:36:18.144253 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-07-06 19:36:18.144298 | orchestrator | ++ PS1='(venv) '
2025-07-06 19:36:18.144362 | orchestrator | ++ export PS1
2025-07-06 19:36:18.144382 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-07-06 19:36:18.144397 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-07-06 19:36:18.144519 | orchestrator | ++ hash -r
2025-07-06 19:36:18.144834 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2025-07-06 19:36:19.317501 | orchestrator |
2025-07-06 19:36:19.317637 | orchestrator | PLAY [Copy custom facts] *******************************************************
2025-07-06 19:36:19.317664 | orchestrator |
2025-07-06 19:36:19.317685 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-07-06 19:36:19.851068 | orchestrator | ok: [testbed-manager]
2025-07-06 19:36:19.851175 | orchestrator |
2025-07-06 19:36:19.851190 | orchestrator | TASK [Copy fact files] *********************************************************
2025-07-06 19:36:20.807732 | orchestrator | changed: [testbed-manager]
2025-07-06 19:36:20.807840 | orchestrator |
2025-07-06 19:36:20.807907 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2025-07-06 19:36:20.807920 | orchestrator |
2025-07-06 19:36:20.807932 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-06 19:36:23.231217 | orchestrator | ok: [testbed-manager]
2025-07-06 19:36:23.231328 | orchestrator |
2025-07-06 19:36:23.231341 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2025-07-06 19:36:23.281800 | orchestrator | ok: [testbed-manager]
2025-07-06 19:36:23.281958 | orchestrator |
2025-07-06 19:36:23.281979 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2025-07-06 19:36:23.748113 | orchestrator | changed: [testbed-manager]
2025-07-06 19:36:23.748207 | orchestrator |
2025-07-06 19:36:23.748220 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2025-07-06 19:36:23.789721 | orchestrator | skipping: [testbed-manager]
2025-07-06 19:36:23.789821 | orchestrator |
2025-07-06 19:36:23.789834 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-07-06 19:36:24.124198 | orchestrator | changed: [testbed-manager]
2025-07-06 19:36:24.124303 | orchestrator |
2025-07-06 19:36:24.124318 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2025-07-06 19:36:24.167947 | orchestrator | skipping: [testbed-manager]
2025-07-06 19:36:24.168058 | orchestrator |
2025-07-06 19:36:24.168073 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2025-07-06 19:36:24.505644 | orchestrator | ok: [testbed-manager]
2025-07-06 19:36:24.505725 | orchestrator |
2025-07-06 19:36:24.505733 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2025-07-06 19:36:24.620949 | orchestrator | skipping: [testbed-manager]
2025-07-06 19:36:24.621045 | orchestrator |
2025-07-06 19:36:24.621060 | orchestrator | PLAY [Apply role traefik] ******************************************************
2025-07-06 19:36:24.621073 | orchestrator |
2025-07-06 19:36:24.621087 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-06 19:36:26.445943 | orchestrator | ok: [testbed-manager]
2025-07-06 19:36:26.446050 | orchestrator |
2025-07-06 19:36:26.446057 | orchestrator | TASK [Apply traefik role] ******************************************************
2025-07-06 19:36:26.547455 | orchestrator | included: osism.services.traefik for testbed-manager
2025-07-06 19:36:26.547557 | orchestrator |
2025-07-06 19:36:26.547572 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2025-07-06 19:36:26.609952 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2025-07-06 19:36:26.610172 | orchestrator |
2025-07-06 19:36:26.610203 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2025-07-06 19:36:27.773979 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2025-07-06 19:36:27.774203 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2025-07-06 19:36:27.774224 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2025-07-06 19:36:27.774236 | orchestrator |
2025-07-06 19:36:27.774249 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2025-07-06 19:36:29.627568 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2025-07-06 19:36:29.627701 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2025-07-06 19:36:29.627758 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2025-07-06 19:36:29.627773 | orchestrator |
2025-07-06 19:36:29.627786 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2025-07-06 19:36:30.265064 | orchestrator | changed: [testbed-manager] => (item=None)
2025-07-06 19:36:30.265166 | orchestrator | changed: [testbed-manager]
2025-07-06 19:36:30.265182 | orchestrator |
2025-07-06 19:36:30.265196 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2025-07-06 19:36:30.962151 | orchestrator | changed: [testbed-manager] => (item=None)
2025-07-06 19:36:30.962248 | orchestrator | changed: [testbed-manager]
2025-07-06 19:36:30.962260 | orchestrator |
2025-07-06 19:36:30.962268 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2025-07-06 19:36:31.015292 | orchestrator | skipping: [testbed-manager]
2025-07-06 19:36:31.015389 | orchestrator |
2025-07-06 19:36:31.015404 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2025-07-06 19:36:31.419961 | orchestrator | ok: [testbed-manager]
2025-07-06 19:36:31.420062 | orchestrator |
2025-07-06 19:36:31.420077 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2025-07-06 19:36:31.497403 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2025-07-06 19:36:31.497506 | orchestrator |
2025-07-06 19:36:31.497522 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2025-07-06 19:36:32.611262 | orchestrator | changed: [testbed-manager]
2025-07-06 19:36:32.611370 | orchestrator |
2025-07-06 19:36:32.611388 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2025-07-06 19:36:33.445676 | orchestrator | changed: [testbed-manager]
2025-07-06 19:36:33.445802 | orchestrator |
2025-07-06 19:36:33.445819 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2025-07-06 19:36:45.309506 | orchestrator | changed: [testbed-manager]
2025-07-06 19:36:45.309631 | orchestrator |
2025-07-06 19:36:45.309649 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2025-07-06 19:36:45.366378 | orchestrator | skipping: [testbed-manager]
2025-07-06 19:36:45.366457 | orchestrator |
2025-07-06 19:36:45.366466 | orchestrator | PLAY [Deploy manager service] **************************************************
2025-07-06 19:36:45.366475 | orchestrator |
2025-07-06 19:36:45.366481 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-06 19:36:47.233468 | orchestrator | ok: [testbed-manager]
2025-07-06 19:36:47.233575 | orchestrator |
2025-07-06 19:36:47.233622 | orchestrator | TASK [Apply manager role] ******************************************************
2025-07-06 19:36:47.352451 | orchestrator | included: osism.services.manager for testbed-manager
2025-07-06 19:36:47.352577 | orchestrator |
2025-07-06 19:36:47.352603 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2025-07-06 19:36:47.408166 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2025-07-06 19:36:47.408265 | orchestrator |
2025-07-06 19:36:47.408280 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2025-07-06 19:36:50.004536 | orchestrator | ok: [testbed-manager]
2025-07-06 19:36:50.004660 | orchestrator |
2025-07-06 19:36:50.004676 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2025-07-06 19:36:50.051200 | orchestrator | ok: [testbed-manager]
2025-07-06 19:36:50.051298 | orchestrator |
2025-07-06 19:36:50.051316 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2025-07-06 19:36:50.183474 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2025-07-06 19:36:50.183573 | orchestrator |
2025-07-06 19:36:50.183588 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2025-07-06 19:36:53.049210 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2025-07-06 19:36:53.049318 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2025-07-06 19:36:53.049332 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2025-07-06 19:36:53.049345 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2025-07-06 19:36:53.049356 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2025-07-06 19:36:53.049367 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2025-07-06 19:36:53.049378 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2025-07-06 19:36:53.049390 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2025-07-06 19:36:53.049401 | orchestrator |
2025-07-06 19:36:53.049413 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2025-07-06 19:36:53.716552 | orchestrator | changed: [testbed-manager]
2025-07-06 19:36:53.716654 | orchestrator |
2025-07-06 19:36:53.716670 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2025-07-06 19:36:54.399097 | orchestrator | changed: [testbed-manager]
2025-07-06 19:36:54.399205 | orchestrator |
2025-07-06 19:36:54.399221 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2025-07-06 19:36:54.484326 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2025-07-06 19:36:54.484429 | orchestrator |
2025-07-06 19:36:54.484445 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2025-07-06 19:36:55.700478 | orchestrator | changed: [testbed-manager] => (item=ara)
2025-07-06 19:36:55.700557 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2025-07-06 19:36:55.700564 | orchestrator |
2025-07-06 19:36:55.700569 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2025-07-06 19:36:56.326237 | orchestrator | changed: [testbed-manager]
2025-07-06 19:36:56.326349 | orchestrator |
2025-07-06 19:36:56.326366 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2025-07-06 19:36:56.385656 | orchestrator | skipping: [testbed-manager]
2025-07-06 19:36:56.385750 | orchestrator |
2025-07-06 19:36:56.385765 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2025-07-06 19:36:56.445150 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2025-07-06 19:36:56.445247 | orchestrator |
2025-07-06 19:36:56.445262 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2025-07-06 19:36:57.863371 | orchestrator | changed: [testbed-manager] => (item=None)
2025-07-06 19:36:57.863476 | orchestrator | changed: [testbed-manager] => (item=None)
2025-07-06 19:36:57.863492 | orchestrator | changed: [testbed-manager]
2025-07-06 19:36:57.863505 | orchestrator |
2025-07-06 19:36:57.863518 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2025-07-06 19:36:58.512550 | orchestrator | changed: [testbed-manager]
2025-07-06 19:36:58.512657 | orchestrator |
2025-07-06 19:36:58.512673 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2025-07-06 19:36:58.572072 | orchestrator | skipping: [testbed-manager]
2025-07-06 19:36:58.572186 | orchestrator |
2025-07-06 19:36:58.572201 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2025-07-06 19:36:58.660190 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2025-07-06 19:36:58.660287 | orchestrator |
2025-07-06 19:36:58.660302 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2025-07-06 19:36:59.196365 | orchestrator | changed: [testbed-manager]
2025-07-06 19:36:59.196473 | orchestrator |
2025-07-06 19:36:59.196489 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2025-07-06 19:36:59.603786 | orchestrator | changed: [testbed-manager]
2025-07-06 19:36:59.603982 | orchestrator |
2025-07-06 19:36:59.604001 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2025-07-06 19:37:00.876242 | orchestrator | changed: [testbed-manager] => (item=conductor)
2025-07-06 19:37:00.876350 | orchestrator | changed: [testbed-manager] => (item=openstack)
2025-07-06 19:37:00.876362 | orchestrator |
2025-07-06 19:37:00.876371 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2025-07-06 19:37:01.508238 | orchestrator | changed: [testbed-manager]
2025-07-06 19:37:01.508352 | orchestrator |
2025-07-06 19:37:01.508369 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2025-07-06 19:37:01.899492 | orchestrator | ok: [testbed-manager]
2025-07-06 19:37:01.899593 | orchestrator |
2025-07-06 19:37:01.899608 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2025-07-06 19:37:02.246969 | orchestrator | changed: [testbed-manager]
2025-07-06 19:37:02.247077 | orchestrator |
2025-07-06 19:37:02.247093 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2025-07-06 19:37:02.286627 | orchestrator | skipping: [testbed-manager]
2025-07-06 19:37:02.286718 | orchestrator |
2025-07-06 19:37:02.286732 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2025-07-06 19:37:02.350187 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2025-07-06 19:37:02.350277 | orchestrator |
2025-07-06 19:37:02.350291 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2025-07-06 19:37:02.402421 | orchestrator | ok: [testbed-manager]
2025-07-06 19:37:02.402510 | orchestrator |
2025-07-06 19:37:02.402523 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2025-07-06 19:37:04.420239 | orchestrator | changed: [testbed-manager] => (item=osism)
2025-07-06 19:37:04.420358 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2025-07-06 19:37:04.420382 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2025-07-06 19:37:04.420401 | orchestrator |
2025-07-06 19:37:04.420421 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2025-07-06 19:37:05.164701 | orchestrator | changed: [testbed-manager]
2025-07-06 19:37:05.164805 | orchestrator |
2025-07-06 19:37:05.164819 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2025-07-06 19:37:05.872997 | orchestrator | changed: [testbed-manager]
2025-07-06 19:37:05.873103 | orchestrator |
2025-07-06 19:37:05.873119 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2025-07-06 19:37:06.606526 | orchestrator | changed: [testbed-manager]
2025-07-06 19:37:06.606646 | orchestrator |
2025-07-06 19:37:06.606666 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2025-07-06 19:37:06.688848 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2025-07-06 19:37:06.688994 | orchestrator |
2025-07-06 19:37:06.689011 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2025-07-06 19:37:06.748236 | orchestrator | ok: [testbed-manager]
2025-07-06 19:37:06.748327 | orchestrator |
2025-07-06 19:37:06.748341 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2025-07-06 19:37:07.477215 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2025-07-06 19:37:07.477318 | orchestrator |
2025-07-06 19:37:07.477333 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2025-07-06 19:37:07.566152 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2025-07-06 19:37:07.566249 | orchestrator |
2025-07-06 19:37:07.566264 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2025-07-06 19:37:08.268254 | orchestrator | changed: [testbed-manager]
2025-07-06 19:37:08.268356 | orchestrator |
2025-07-06 19:37:08.268372 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2025-07-06 19:37:08.873265 | orchestrator | ok: [testbed-manager]
2025-07-06 19:37:08.873343 | orchestrator |
2025-07-06 19:37:08.873351 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2025-07-06 19:37:08.929903 | orchestrator | skipping: [testbed-manager]
2025-07-06 19:37:08.930091 | orchestrator |
2025-07-06 19:37:08.930104 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2025-07-06 19:37:08.992535 | orchestrator | ok: [testbed-manager]
2025-07-06 19:37:08.992643 | orchestrator |
2025-07-06 19:37:08.992659 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2025-07-06 19:37:09.845142 | orchestrator | changed: [testbed-manager]
2025-07-06 19:37:09.845253 | orchestrator |
2025-07-06 19:37:09.845270 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2025-07-06 19:38:15.378635 | orchestrator | changed: [testbed-manager]
2025-07-06 19:38:15.378756 | orchestrator |
2025-07-06 19:38:15.378773 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2025-07-06 19:38:16.295819 | orchestrator | ok: [testbed-manager]
2025-07-06 19:38:16.295943 | orchestrator |
2025-07-06 19:38:16.295969 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2025-07-06 19:38:16.341143 | orchestrator | skipping: [testbed-manager]
2025-07-06 19:38:16.341239 | orchestrator |
2025-07-06 19:38:16.341253 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2025-07-06 19:38:19.002241 | orchestrator | changed: [testbed-manager]
2025-07-06 19:38:19.002335 | orchestrator |
2025-07-06 19:38:19.002343 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2025-07-06 19:38:19.049056 | orchestrator | ok: [testbed-manager]
2025-07-06 19:38:19.049169 | orchestrator |
2025-07-06 19:38:19.049188 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-07-06 19:38:19.049201 | orchestrator |
2025-07-06 19:38:19.049213 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2025-07-06 19:38:19.097547 | orchestrator | skipping: [testbed-manager]
2025-07-06 19:38:19.097651 | orchestrator |
2025-07-06 19:38:19.097667 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2025-07-06 19:39:19.146875 | orchestrator | Pausing for 60 seconds
2025-07-06 19:39:19.146992 | orchestrator | changed: [testbed-manager]
2025-07-06 19:39:19.147009 | orchestrator |
2025-07-06 19:39:19.147022 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2025-07-06 19:39:23.326918 | orchestrator | changed: [testbed-manager]
2025-07-06 19:39:23.327027 | orchestrator |
2025-07-06 19:39:23.327044 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2025-07-06 19:40:05.078578 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2025-07-06 19:40:05.078664 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2025-07-06 19:40:05.078672 | orchestrator | changed: [testbed-manager]
2025-07-06 19:40:05.078679 | orchestrator |
2025-07-06 19:40:05.078685 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2025-07-06 19:40:14.336243 | orchestrator | changed: [testbed-manager]
2025-07-06 19:40:14.336389 | orchestrator |
2025-07-06 19:40:14.336409 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2025-07-06 19:40:14.427731 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2025-07-06 19:40:14.427866 | orchestrator |
2025-07-06 19:40:14.427882 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-07-06 19:40:14.427895 | orchestrator |
2025-07-06 19:40:14.427907 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2025-07-06 19:40:14.484138 | orchestrator | skipping: [testbed-manager]
2025-07-06 19:40:14.484279 | orchestrator |
2025-07-06 19:40:14.484296 | orchestrator | PLAY RECAP *********************************************************************
2025-07-06 19:40:14.484309 | orchestrator | testbed-manager : ok=64 changed=35 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2025-07-06 19:40:14.484321 | orchestrator |
2025-07-06 19:40:14.580283 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-07-06 19:40:14.580379 | orchestrator | + deactivate
2025-07-06 19:40:14.580394 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-07-06 19:40:14.580408 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-07-06 19:40:14.580419 | orchestrator | + export PATH
2025-07-06 19:40:14.580430 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-07-06 19:40:14.580441 | orchestrator | + '[' -n '' ']'
2025-07-06 19:40:14.580452 | orchestrator | + hash -r
2025-07-06 19:40:14.580462 | orchestrator | + '[' -n '' ']'
2025-07-06 19:40:14.580473 | orchestrator | + unset VIRTUAL_ENV
2025-07-06 19:40:14.580484 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-07-06 19:40:14.580495 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-07-06 19:40:14.580505 | orchestrator | + unset -f deactivate
2025-07-06 19:40:14.580517 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2025-07-06 19:40:14.590243 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-07-06 19:40:14.590317 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-07-06 19:40:14.590340 | orchestrator | + local max_attempts=60
2025-07-06 19:40:14.590353 | orchestrator | + local name=ceph-ansible
2025-07-06 19:40:14.590364 | orchestrator | + local attempt_num=1
2025-07-06 19:40:14.590536 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-06 19:40:14.618070 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-07-06 19:40:14.618141 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-07-06 19:40:14.618155 | orchestrator | + local max_attempts=60
2025-07-06 19:40:14.618167 | orchestrator | + local name=kolla-ansible
2025-07-06 19:40:14.618178 | orchestrator | + local attempt_num=1
2025-07-06 19:40:14.618798 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-07-06 19:40:14.657614 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-07-06 19:40:14.657704 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-07-06 19:40:14.657714 | orchestrator | + local max_attempts=60
2025-07-06 19:40:14.657723 | orchestrator | + local name=osism-ansible
2025-07-06 19:40:14.657731 | orchestrator | + local attempt_num=1
2025-07-06 19:40:14.658923 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-07-06 19:40:14.695931 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-07-06 19:40:14.696011 | orchestrator | + [[ true == \t\r\u\e ]]
2025-07-06 19:40:14.696025 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-07-06 19:40:15.413261 | orchestrator | + docker compose --project-directory /opt/manager ps
2025-07-06 19:40:15.628436 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-07-06 19:40:15.628540 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy)
2025-07-06 19:40:15.628557 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy)
2025-07-06 19:40:15.628569 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp
2025-07-06 19:40:15.628583 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp
2025-07-06 19:40:15.628616 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy)
2025-07-06 19:40:15.628643 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy)
2025-07-06 19:40:15.628654 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 52 seconds (healthy)
2025-07-06 19:40:15.628665 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy)
2025-07-06
19:40:15.628676 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-07-06 19:40:15.628686 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy) 2025-07-06 19:40:15.628697 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-07-06 19:40:15.628708 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-07-06 19:40:15.628719 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-07-06 19:40:15.630204 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2025-07-06 19:40:15.638670 | orchestrator | ++ semver latest 7.0.0 2025-07-06 19:40:15.682574 | orchestrator | + [[ -1 -ge 0 ]] 2025-07-06 19:40:15.682697 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-07-06 19:40:15.682716 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-07-06 19:40:15.685619 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-07-06 19:40:27.889128 | orchestrator | 2025-07-06 19:40:27 | INFO  | Task 593b1b27-9677-48ca-a95e-7cd751cd0554 (resolvconf) was prepared for execution. 2025-07-06 19:40:27.889283 | orchestrator | 2025-07-06 19:40:27 | INFO  | It takes a moment until task 593b1b27-9677-48ca-a95e-7cd751cd0554 (resolvconf) has been started and output is visible here. 
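The xtrace earlier in the log shows a `wait_for_container_healthy` helper being called for each manager container, polling `docker inspect` for the container's health status. A minimal sketch of such a polling loop, reconstructed from the trace — the retry logic, sleep interval, and error message are assumptions (the trace only shows the argument handling and the `docker inspect` query); the actual helper lives in the testbed deploy scripts:

```shell
# Sketch of the wait_for_container_healthy helper seen in the xtrace above.
# Sleep interval and failure message are assumptions; the trace shows only
# the locals and the docker inspect health query.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1

    # Poll until Docker reports the container's healthcheck as "healthy"
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)" == "healthy" ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "container $name not healthy after $max_attempts attempts" >&2
            return 1
        fi
        attempt_num=$(( attempt_num + 1 ))
        sleep 5
    done
}
```

The trace invokes the real helper as `wait_for_container_healthy 60 ceph-ansible`, i.e. up to 60 polls per container before giving up.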
2025-07-06 19:40:41.378733 | orchestrator | 2025-07-06 19:40:41.378866 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-07-06 19:40:41.378880 | orchestrator | 2025-07-06 19:40:41.378888 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-06 19:40:41.378894 | orchestrator | Sunday 06 July 2025 19:40:31 +0000 (0:00:00.148) 0:00:00.148 *********** 2025-07-06 19:40:41.378901 | orchestrator | ok: [testbed-manager] 2025-07-06 19:40:41.378909 | orchestrator | 2025-07-06 19:40:41.378917 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-07-06 19:40:41.378925 | orchestrator | Sunday 06 July 2025 19:40:35 +0000 (0:00:03.673) 0:00:03.821 *********** 2025-07-06 19:40:41.378963 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:40:41.378972 | orchestrator | 2025-07-06 19:40:41.378982 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-07-06 19:40:41.378989 | orchestrator | Sunday 06 July 2025 19:40:35 +0000 (0:00:00.066) 0:00:03.887 *********** 2025-07-06 19:40:41.379014 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-07-06 19:40:41.379022 | orchestrator | 2025-07-06 19:40:41.379028 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-07-06 19:40:41.379034 | orchestrator | Sunday 06 July 2025 19:40:35 +0000 (0:00:00.091) 0:00:03.979 *********** 2025-07-06 19:40:41.379040 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-07-06 19:40:41.379046 | orchestrator | 2025-07-06 19:40:41.379052 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2025-07-06 19:40:41.379058 | orchestrator | Sunday 06 July 2025 19:40:35 +0000 (0:00:00.083) 0:00:04.063 *********** 2025-07-06 19:40:41.379064 | orchestrator | ok: [testbed-manager] 2025-07-06 19:40:41.379070 | orchestrator | 2025-07-06 19:40:41.379076 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-07-06 19:40:41.379082 | orchestrator | Sunday 06 July 2025 19:40:36 +0000 (0:00:01.089) 0:00:05.152 *********** 2025-07-06 19:40:41.379087 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:40:41.379093 | orchestrator | 2025-07-06 19:40:41.379099 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-07-06 19:40:41.379105 | orchestrator | Sunday 06 July 2025 19:40:36 +0000 (0:00:00.063) 0:00:05.216 *********** 2025-07-06 19:40:41.379110 | orchestrator | ok: [testbed-manager] 2025-07-06 19:40:41.379116 | orchestrator | 2025-07-06 19:40:41.379122 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-07-06 19:40:41.379128 | orchestrator | Sunday 06 July 2025 19:40:37 +0000 (0:00:00.482) 0:00:05.698 *********** 2025-07-06 19:40:41.379134 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:40:41.379140 | orchestrator | 2025-07-06 19:40:41.379146 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-07-06 19:40:41.379154 | orchestrator | Sunday 06 July 2025 19:40:37 +0000 (0:00:00.087) 0:00:05.786 *********** 2025-07-06 19:40:41.379162 | orchestrator | changed: [testbed-manager] 2025-07-06 19:40:41.379168 | orchestrator | 2025-07-06 19:40:41.379174 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-07-06 19:40:41.379180 | orchestrator | Sunday 06 July 2025 19:40:37 +0000 (0:00:00.507) 0:00:06.293 *********** 2025-07-06 19:40:41.379186 | orchestrator | changed: 
[testbed-manager] 2025-07-06 19:40:41.379192 | orchestrator | 2025-07-06 19:40:41.379197 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-07-06 19:40:41.379203 | orchestrator | Sunday 06 July 2025 19:40:38 +0000 (0:00:01.106) 0:00:07.399 *********** 2025-07-06 19:40:41.379209 | orchestrator | ok: [testbed-manager] 2025-07-06 19:40:41.379215 | orchestrator | 2025-07-06 19:40:41.379254 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-07-06 19:40:41.379263 | orchestrator | Sunday 06 July 2025 19:40:39 +0000 (0:00:00.945) 0:00:08.345 *********** 2025-07-06 19:40:41.379270 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-07-06 19:40:41.379277 | orchestrator | 2025-07-06 19:40:41.379284 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-07-06 19:40:41.379291 | orchestrator | Sunday 06 July 2025 19:40:39 +0000 (0:00:00.086) 0:00:08.431 *********** 2025-07-06 19:40:41.379298 | orchestrator | changed: [testbed-manager] 2025-07-06 19:40:41.379304 | orchestrator | 2025-07-06 19:40:41.379322 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 19:40:41.379331 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-06 19:40:41.379338 | orchestrator | 2025-07-06 19:40:41.379344 | orchestrator | 2025-07-06 19:40:41.379351 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 19:40:41.379368 | orchestrator | Sunday 06 July 2025 19:40:41 +0000 (0:00:01.135) 0:00:09.566 *********** 2025-07-06 19:40:41.379374 | orchestrator | =============================================================================== 2025-07-06 19:40:41.379380 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.67s 2025-07-06 19:40:41.379387 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.14s 2025-07-06 19:40:41.379393 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.11s 2025-07-06 19:40:41.379401 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.09s 2025-07-06 19:40:41.379407 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.95s 2025-07-06 19:40:41.379414 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.51s 2025-07-06 19:40:41.379439 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.48s 2025-07-06 19:40:41.379446 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s 2025-07-06 19:40:41.379451 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s 2025-07-06 19:40:41.379458 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s 2025-07-06 19:40:41.379464 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s 2025-07-06 19:40:41.379470 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2025-07-06 19:40:41.379476 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2025-07-06 19:40:41.639844 | orchestrator | + osism apply sshconfig 2025-07-06 19:40:53.595967 | orchestrator | 2025-07-06 19:40:53 | INFO  | Task 33595244-3637-4da9-83a3-6e031a933a6c (sshconfig) was prepared for execution. 
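The sshconfig play in this log writes one snippet per testbed host into `~/.ssh/config.d` ("Ensure config for each host exist") and then concatenates them into a single `~/.ssh/config` ("Assemble ssh config"). A rough shell equivalent of that snippet-and-assemble pattern — the demo directory and the Host options are illustrative assumptions, not the role's actual template:

```shell
# Demo of the snippet-per-host + assemble pattern used by
# osism.commons.sshconfig. The directory and Host options are
# illustrative assumptions; the role renders its own template.
sshdir="./ssh-demo"
mkdir -p "$sshdir/config.d"

# One snippet per host, mirroring the loop items in the play above
for host in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 \
            testbed-node-3 testbed-node-4 testbed-node-5; do
    cat > "$sshdir/config.d/$host" <<EOF
Host $host
    User dragon
    StrictHostKeyChecking no
EOF
done

# "Assemble ssh config": concatenate all snippets into one file
cat "$sshdir"/config.d/* > "$sshdir/config"
```

Keeping per-host snippets in a `config.d` directory lets the role add or remove a single host's entry idempotently and rebuild the assembled file, which is why the play reports a separate `changed` item per host before the final assemble step.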
2025-07-06 19:40:53.596090 | orchestrator | 2025-07-06 19:40:53 | INFO  | It takes a moment until task 33595244-3637-4da9-83a3-6e031a933a6c (sshconfig) has been started and output is visible here. 2025-07-06 19:41:05.196492 | orchestrator | 2025-07-06 19:41:05.196637 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-07-06 19:41:05.196664 | orchestrator | 2025-07-06 19:41:05.196683 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-07-06 19:41:05.196703 | orchestrator | Sunday 06 July 2025 19:40:57 +0000 (0:00:00.164) 0:00:00.164 *********** 2025-07-06 19:41:05.196722 | orchestrator | ok: [testbed-manager] 2025-07-06 19:41:05.196743 | orchestrator | 2025-07-06 19:41:05.196762 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-07-06 19:41:05.196781 | orchestrator | Sunday 06 July 2025 19:40:58 +0000 (0:00:00.581) 0:00:00.745 *********** 2025-07-06 19:41:05.196798 | orchestrator | changed: [testbed-manager] 2025-07-06 19:41:05.196810 | orchestrator | 2025-07-06 19:41:05.196821 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-07-06 19:41:05.196832 | orchestrator | Sunday 06 July 2025 19:40:58 +0000 (0:00:00.525) 0:00:01.271 *********** 2025-07-06 19:41:05.196843 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-07-06 19:41:05.196854 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-07-06 19:41:05.196866 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-07-06 19:41:05.196876 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-07-06 19:41:05.196887 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-07-06 19:41:05.196898 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-07-06 19:41:05.196910 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2025-07-06 19:41:05.196921 | orchestrator | 2025-07-06 19:41:05.196932 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-07-06 19:41:05.196943 | orchestrator | Sunday 06 July 2025 19:41:04 +0000 (0:00:05.678) 0:00:06.949 *********** 2025-07-06 19:41:05.197000 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:41:05.197015 | orchestrator | 2025-07-06 19:41:05.197027 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-07-06 19:41:05.197041 | orchestrator | Sunday 06 July 2025 19:41:04 +0000 (0:00:00.071) 0:00:07.021 *********** 2025-07-06 19:41:05.197054 | orchestrator | changed: [testbed-manager] 2025-07-06 19:41:05.197067 | orchestrator | 2025-07-06 19:41:05.197079 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 19:41:05.197094 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-06 19:41:05.197107 | orchestrator | 2025-07-06 19:41:05.197120 | orchestrator | 2025-07-06 19:41:05.197132 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 19:41:05.197142 | orchestrator | Sunday 06 July 2025 19:41:04 +0000 (0:00:00.565) 0:00:07.587 *********** 2025-07-06 19:41:05.197153 | orchestrator | =============================================================================== 2025-07-06 19:41:05.197164 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.68s 2025-07-06 19:41:05.197174 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.58s 2025-07-06 19:41:05.197185 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.57s 2025-07-06 19:41:05.197196 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.53s 2025-07-06 19:41:05.197206 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s 2025-07-06 19:41:05.446500 | orchestrator | + osism apply known-hosts 2025-07-06 19:41:17.277684 | orchestrator | 2025-07-06 19:41:17 | INFO  | Task ff60b584-848e-4b85-a050-e4c42dfaea57 (known-hosts) was prepared for execution. 2025-07-06 19:41:17.277768 | orchestrator | 2025-07-06 19:41:17 | INFO  | It takes a moment until task ff60b584-848e-4b85-a050-e4c42dfaea57 (known-hosts) has been started and output is visible here. 2025-07-06 19:41:33.511880 | orchestrator | 2025-07-06 19:41:33.511999 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-07-06 19:41:33.512017 | orchestrator | 2025-07-06 19:41:33.512029 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-07-06 19:41:33.512041 | orchestrator | Sunday 06 July 2025 19:41:21 +0000 (0:00:00.167) 0:00:00.168 *********** 2025-07-06 19:41:33.512054 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-07-06 19:41:33.512066 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-07-06 19:41:33.512077 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-07-06 19:41:33.512089 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-07-06 19:41:33.512099 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-07-06 19:41:33.512110 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-07-06 19:41:33.512121 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-07-06 19:41:33.512132 | orchestrator | 2025-07-06 19:41:33.512143 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-07-06 19:41:33.512156 | orchestrator | Sunday 06 July 2025 19:41:26 +0000 (0:00:05.729) 0:00:05.897 *********** 2025-07-06 
19:41:33.512168 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-07-06 19:41:33.512180 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-07-06 19:41:33.512191 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-07-06 19:41:33.512202 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-07-06 19:41:33.512236 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-07-06 19:41:33.512248 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-07-06 19:41:33.512270 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-07-06 19:41:33.512282 | orchestrator | 2025-07-06 19:41:33.512322 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-06 19:41:33.512334 | orchestrator | Sunday 06 July 2025 19:41:27 +0000 (0:00:00.170) 0:00:06.067 *********** 2025-07-06 19:41:33.512346 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP0tvy+MrX119Kn8lGIHGLgIsKxy4MclICq0p/6l1S0/nqxaJi7o5YhftCQb/aL49caatIvSZokWVvW9AqF8dnc=) 2025-07-06 19:41:33.512363 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9wif4GPBaMuKjfDXD+ywi4bN79n99EBvEgndcX3zCQeMx6WXlMiRWS1GHM4+04wp+tmPYDboZG+kfCwPJMZk0iKqDQ8H8RztMH4yTMot9Mf6K6LcPqWtfzqemRDz9Cj2em/iIfZJknONnuTLeFASZuHo6Nt8ivOeNYqDByBWbVGDXGxGU+P+6GCceUI3vXiUM0UDeLng9cQLmyS8H8EEt7d0ZheVxeV3enpGFR/VcO7ruSqQ3rDo19ODZ6z9WcxeogCAKgOD0d3czDIiMeiBuTO03UuwpmEI8ZI8qLOdU09SQSNhwAECZJTytqxaUa2tdVbLap2TQbqUeOCHi5Sw627ZiWjEZbo9DkQsE8WWtl3Z15408Zq7e32j5B41gr4WYEwf+n3vrBmN8vWmoqGrcpAN00s5R4rT8NqlGqSU4vOS3mctXpPbbUIwNHifPfQbvIxYeJTrDArxasoRkTW6UKORCgjpx7QRVz9m/L+wkyqnp/L+EVPhlr8Sb0E8Adyc=) 2025-07-06 19:41:33.512377 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMShDMDtrzzr69bQMv7RAkDJXuLaTz1hFqkdsynqmlwV) 2025-07-06 19:41:33.512390 | orchestrator | 2025-07-06 19:41:33.512401 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-06 19:41:33.512412 | orchestrator | Sunday 06 July 2025 19:41:28 +0000 (0:00:01.161) 0:00:07.229 *********** 2025-07-06 19:41:33.512452 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDW4Hp5CJBvSPunGOUbShX/Z8ykk40TxTG4gYQ++UdlWoy/Lk9sW4fY7Xp2YIZ6SSfzazg3ylPTM7Zdb6zYnMdVy+fRpPw6S4NIoU88EmA5EPQWVOKKFm9X27A9iSoX9kQWM5z5E/PCfvfEdcJU9W3815twulDcgG9BL38lJPA7p28Yyp8WXsvd3LeQkNHdaAWZd1NUuKZDw2KIZSbOWweOm2Rc23NgSyvgXTCwPJpG17Z7strgdb9DFJC/HN8aA/2HR8L4hWlc6SdcTz2XeRIDa6m5dBWlDTW6e7ef37dn/S/aGlpcWvs0thmTjeMMXXIMzuL4kKtgqBBOjI5X6S8BZ9rIkLM6crI2a+tYQul0uTNBLOwuQxQiz7YLHjfRCR8H3uYNw1TcZnwGHpAgmeQyv3AeBNAuaiFLbDm25Y+gpOBTBymm1nf9HLTEYO+HMNLLL33IeTJ6aekLy6MzDj84GvI4Abp+bryu4V155SYfR6CqbzNhvG42vwFjt+Fbl1U=) 2025-07-06 19:41:33.512473 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJF5iAFwUMwJ8BQIcGJLlb08DRUfU7In6VRMzO6ldj9v+MwjjKMLDdWUVFbxYIbH7IikTl+Pog3t4lsLOsJtnj8=) 2025-07-06 19:41:33.512491 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICAvgK4fHqK+7LiXNum8oyHCPNLXZSu8aDnAq/5eNFpn) 2025-07-06 19:41:33.512509 | orchestrator | 2025-07-06 19:41:33.512528 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-06 19:41:33.512546 | orchestrator | Sunday 06 July 2025 19:41:29 +0000 (0:00:01.049) 0:00:08.278 *********** 2025-07-06 19:41:33.512565 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGjLyBWUA7B0rd7j6yo1UPEW19aZwdveaV3nR8jW4GLHDrC0zWJ+v3pWoOobjeZt6jiOe9c2lHQOEYYmUK2IPrw=) 2025-07-06 19:41:33.512582 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDchw5+SCUaCJHfT4RyteA3ydi8OrlGPhmyhB2/VtXwytyikdQDlAm91uvTEGTzCqn2+l4Jg2KZjgvwO5pALt8N7nFOB76qtxVnS+GcgiJSMaCcBXsvcrVDrIKOkblz6shrh8L1VgmHzSjdgB46M+RcQlZ0VV//lEenUi7AQt+PtS7kWE0a7cGjCcf0iWub9PzV/cS7hDNZ19yTaHjnR1SN7oNFRH5QRMcvC6m07F11xnnHaxjem1nsyuDcqyUdYxCabP/DcVpoMDWzIhGVD+uL8be6yj9Mj67yiKNP0JUAf4W9Z0opY3pDj1hU2WulN3/YR82QAyd0FClwk0W2imifK+7GQNH/TVlv5fGMq/GOcnQQzaH1ax6FIbCrEcyEkcaYjH9NcBySYk/8dRlZ6BPGMH13m7jiz9amtGL15sTEoAobzQBIhEEBlJSNG1/uznEpOMXuYPx+qxdXB3J+LaqBkC8so+PggLfdfXlIZJHOSt8u4jVnovv20r1jHock340=) 2025-07-06 19:41:33.512606 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKEHtQD5FNBSFEW8PGED8DQNIqQJ9ZChbPYfFe17eMbo) 2025-07-06 19:41:33.512617 | orchestrator | 2025-07-06 19:41:33.512628 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-06 19:41:33.512638 | orchestrator | Sunday 06 July 2025 19:41:30 +0000 (0:00:01.049) 0:00:09.328 *********** 
2025-07-06 19:41:33.512649 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBYQlnUf2h+va1BJArwX3CMnHlVjouL4iPM5wwL0mxWi2k4RZJljNm6ADTQ6sTSaGLSs3c150DtNFXfgyMf2mUU=) 2025-07-06 19:41:33.512725 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7k1LninVjnmRAYhjTSJkP1E/DJG4aNdLUYUQCX3swyMYwJwIgp8tVUfP75vhEh1jZXGs0cqUesE9s9/BLTTD2y8gQ8RvHRKiJ56Hy1n2DXXYH1nJZAS/4/mY8zB0jgQrhH3za9IJbOpA5UoMVHL1ZERXkTK44nYjY29GopuRPBvt6qJy+IvmvWMk0QT3VhQB9Bplj0Z897c0gflcr1yFnHr23LKknurl2Rg1ZEmcklxDG70i/2XVg9FVZ3x4V+s33IL8pZxYrGCSvuCwoEDfN5qcVlsA9Zb3qppKuEMvvH1mzJtNaLpd1wZ4hwHz7qMag/TTzOTYQfSRXqaCPwHLfdu/Xk24pCuHOARTiHe2kLUd8zEAdBtc61Ar8qHLdt0cl9/zdwegZZHJp+Itqtgks00icYc8R1kMGdO/J7OTxePK71S1VRo0ilBWXldQwQEXzCoEHj7VSvjVf1aGR154UfrgxfDk9y5zA6VQqKf4zzJMgQ83rbyCV4IFhnoy0huM=) 2025-07-06 19:41:33.512737 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO6YIb6arR5ldF19Ll0Z0cugNymQRmc2hBwjeMauAdFS) 2025-07-06 19:41:33.512748 | orchestrator | 2025-07-06 19:41:33.512758 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-06 19:41:33.512769 | orchestrator | Sunday 06 July 2025 19:41:31 +0000 (0:00:01.073) 0:00:10.401 *********** 2025-07-06 19:41:33.512780 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPMiLS92e3AJFxpFGJwQ70WCS72kQSbM0PaybOexuYc1qZP2Jazw88ZnAIyev4c2OONM5eIk0iY+Vyx3/jQnmak=) 2025-07-06 19:41:33.512791 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN8PJqqysJ5zPD+YS/WsGAU9cD3fWjvarckNTQWQl3/p) 2025-07-06 19:41:33.512803 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDLSXiJwuF/F9CisN+gZNaxuNnJ3wuZElq6DtRA7zGMtGPm+J0BPWu2SNLfKD1cpY0Yai6Osej/lTpgfoHbE/1gpt6eJU90MNJ8jCFYbXntdsffuVPvD1sCaS4s9FlCbl5e12K65oujzeGYdnaUA2h7JSkOmOdo8JKJxzV6j5Mrl2cxhaClv9CUajP/7ws8bOHwbolCP7G/PFxVQ7R+XDL+8LfTql7T6wvxYpGnD7Y/ZG0pOO7LqDul1VGutKYQ5b4aK8tEo3E6ZUl/uqhhlffgoH10Evk4USHkIkp/bdv0esBgW5hvl0UUbd2ZS8XDrUwzB3CuATGjdYSAiqfPeLRAsVaa+YTIUlJw6ZjTK9lKRYi+wBiP/Dviyq5KP7vGyVFJAnHE86k2qmi2+MQTBlN0J01BHautW4/LaSpyawxSsvlAlXXvCwXBl35XgArvAlzs1RSRilidwa+d0S4UH4zT6I5K02x+FlCSXD6lMx8DNpivbxRqqQcW76/k822mfB8=) 2025-07-06 19:41:33.512814 | orchestrator | 2025-07-06 19:41:33.512847 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-06 19:41:33.512858 | orchestrator | Sunday 06 July 2025 19:41:32 +0000 (0:00:01.094) 0:00:11.495 *********** 2025-07-06 19:41:33.512881 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCg87R+s4thB9uEAX0DU8WNwAhWCJdO0yb4xlL/WG1tGXREOLuzbaSnAoaeXHKIjJRT65GBxA4NG3AJHFlavlcuLKv/zmc/VhSVAqb8udrllyMQs2qYiq+fEX2AG91+rhJ91FFU5aCIQT7rcI9o6hYIGgdIygPVt5cDnTkIAptSKxr1j+g1wvnZ7oTeam+o1NKPz0oa24eudKLFx6Xx2lcR+rl/4ucJEk5iyo0vd12SxNOLJYgmoSpB3FK7McBf68w6vSHgQqrQsAn8LB8GendosbMOoKHwgHpTpDm2ogRPtynzh/MSjYZhMYQEU5xrOUlyPiLtBIfszuuCT1Xn4zEzeNQqsIhm6BCwsgR1FYGqPLLsiQPvrqOfzywd+ywDEqwv/44de/3DQfNZhPTY3yO0mB8Ku0D2U755hijo7bC8LsKaZ5g+mnUgH7ceRkFGNW4wzJtk64/XJaQmZZ3OjL+qMXzeHjSVJDUyBJ/VzxvprC0KQdeoTo0j3nu9vP+k4hM=) 2025-07-06 19:41:45.223173 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPzPJdgcFHgYh+ISTnH/X/K1ivsoxYOmiD1v56cQtxs1) 2025-07-06 19:41:45.223268 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNabN9oFLUmfgJKNMv8rCg20UAEoF9hfi884pxXcvuqs4aEE3gcM68Ar6kOiaJT+0XT2fOgfFIBd/gs0Y1p+buQ=) 2025-07-06 19:41:45.223282 | orchestrator | 2025-07-06 19:41:45.223292 | 
orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-06 19:41:45.223303 | orchestrator | Sunday 06 July 2025 19:41:33 +0000 (0:00:01.050) 0:00:12.545 *********** 2025-07-06 19:41:45.223346 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDEmB33UaJCJZLWUh3AusefgSVntss81p/8JuWcpHei+BW2Edw7SHwRinfJtGu6FpA8Ay3sQGvH7cjHCYn9bsWd7k/HnGltX41TyIjUdIj+5l2bx9TYuXljFROazGfkRuxl9ioTpo7b3IuAPurr2zjGrouCooKXkccSPnjsM1+cNAXn1Cp8ztSNxzyp/c7bLGiCE+zjjD3X9TD9mNda3JzhpzPvs8LEevIHe0hvbCpIBWqTGp4X0tMj1IcS4sofuIxuJuy5xIJ9t1O3tm8gnTz12hS8ql+Q9sI83eVSH2w3BL1qLApb5Z4q/DT9fdmZYO0II9cTR5RjoOy3qJ5GJx/EiILw4lPV9bzPoYjx44tSQLOzLgDLhOiFyBiLKOSAf1ZmN1TQaMmSMLo9oJhOBwfjDnaqg2HyrNyChbVX+vtPCTfq/2YqZedZR3sYyY4I4a0z3UWNvpA6DUnTLVE1OuedqiE4xh58A7b+EJ4umKNYayWX+HMjtb43OZ6nCGeprUc=) 2025-07-06 19:41:45.223353 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOkVVTy6YVrwQ7Rj6zMbUbLD5p2kQWtYd/2fwsOsKg7gSLS1LT3T0QmGOq1H0JpMLTo2JUdxWjnz9g7eDwNHy0c=) 2025-07-06 19:41:45.223358 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPq/GxejQuvbu+CorKorLpzR1OAfZexsqn66kE6XZWvI) 2025-07-06 19:41:45.223363 | orchestrator | 2025-07-06 19:41:45.223368 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-07-06 19:41:45.223373 | orchestrator | Sunday 06 July 2025 19:41:34 +0000 (0:00:01.048) 0:00:13.594 *********** 2025-07-06 19:41:45.223378 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-07-06 19:41:45.223383 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-07-06 19:41:45.223388 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-07-06 19:41:45.223392 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-07-06 19:41:45.223396 | orchestrator | 
ok: [testbed-manager] => (item=testbed-node-3) 2025-07-06 19:41:45.223401 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-07-06 19:41:45.223405 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-07-06 19:41:45.223409 | orchestrator | 2025-07-06 19:41:45.223414 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-07-06 19:41:45.223420 | orchestrator | Sunday 06 July 2025 19:41:39 +0000 (0:00:05.215) 0:00:18.809 *********** 2025-07-06 19:41:45.223439 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-07-06 19:41:45.223446 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-07-06 19:41:45.223451 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-07-06 19:41:45.223455 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-07-06 19:41:45.223459 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-07-06 19:41:45.223481 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-07-06 19:41:45.223486 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-07-06 19:41:45.223490 | orchestrator | 2025-07-06 19:41:45.223494 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-06 19:41:45.223499 | orchestrator | Sunday 06 July 2025 19:41:39 +0000 (0:00:00.158) 0:00:18.968 *********** 2025-07-06 19:41:45.223503 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP0tvy+MrX119Kn8lGIHGLgIsKxy4MclICq0p/6l1S0/nqxaJi7o5YhftCQb/aL49caatIvSZokWVvW9AqF8dnc=) 2025-07-06 19:41:45.223521 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9wif4GPBaMuKjfDXD+ywi4bN79n99EBvEgndcX3zCQeMx6WXlMiRWS1GHM4+04wp+tmPYDboZG+kfCwPJMZk0iKqDQ8H8RztMH4yTMot9Mf6K6LcPqWtfzqemRDz9Cj2em/iIfZJknONnuTLeFASZuHo6Nt8ivOeNYqDByBWbVGDXGxGU+P+6GCceUI3vXiUM0UDeLng9cQLmyS8H8EEt7d0ZheVxeV3enpGFR/VcO7ruSqQ3rDo19ODZ6z9WcxeogCAKgOD0d3czDIiMeiBuTO03UuwpmEI8ZI8qLOdU09SQSNhwAECZJTytqxaUa2tdVbLap2TQbqUeOCHi5Sw627ZiWjEZbo9DkQsE8WWtl3Z15408Zq7e32j5B41gr4WYEwf+n3vrBmN8vWmoqGrcpAN00s5R4rT8NqlGqSU4vOS3mctXpPbbUIwNHifPfQbvIxYeJTrDArxasoRkTW6UKORCgjpx7QRVz9m/L+wkyqnp/L+EVPhlr8Sb0E8Adyc=) 2025-07-06 19:41:45.223526 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMShDMDtrzzr69bQMv7RAkDJXuLaTz1hFqkdsynqmlwV) 2025-07-06 19:41:45.223531 | orchestrator | 2025-07-06 19:41:45.223535 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-06 19:41:45.223539 | orchestrator | Sunday 06 July 2025 19:41:40 +0000 (0:00:01.051) 0:00:20.019 *********** 2025-07-06 19:41:45.223544 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJF5iAFwUMwJ8BQIcGJLlb08DRUfU7In6VRMzO6ldj9v+MwjjKMLDdWUVFbxYIbH7IikTl+Pog3t4lsLOsJtnj8=) 2025-07-06 19:41:45.223548 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDW4Hp5CJBvSPunGOUbShX/Z8ykk40TxTG4gYQ++UdlWoy/Lk9sW4fY7Xp2YIZ6SSfzazg3ylPTM7Zdb6zYnMdVy+fRpPw6S4NIoU88EmA5EPQWVOKKFm9X27A9iSoX9kQWM5z5E/PCfvfEdcJU9W3815twulDcgG9BL38lJPA7p28Yyp8WXsvd3LeQkNHdaAWZd1NUuKZDw2KIZSbOWweOm2Rc23NgSyvgXTCwPJpG17Z7strgdb9DFJC/HN8aA/2HR8L4hWlc6SdcTz2XeRIDa6m5dBWlDTW6e7ef37dn/S/aGlpcWvs0thmTjeMMXXIMzuL4kKtgqBBOjI5X6S8BZ9rIkLM6crI2a+tYQul0uTNBLOwuQxQiz7YLHjfRCR8H3uYNw1TcZnwGHpAgmeQyv3AeBNAuaiFLbDm25Y+gpOBTBymm1nf9HLTEYO+HMNLLL33IeTJ6aekLy6MzDj84GvI4Abp+bryu4V155SYfR6CqbzNhvG42vwFjt+Fbl1U=) 2025-07-06 19:41:45.223553 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICAvgK4fHqK+7LiXNum8oyHCPNLXZSu8aDnAq/5eNFpn) 2025-07-06 19:41:45.223557 | orchestrator | 2025-07-06 19:41:45.223562 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-06 19:41:45.223566 | orchestrator | Sunday 06 July 2025 19:41:42 +0000 (0:00:01.084) 0:00:21.104 *********** 2025-07-06 19:41:45.223570 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKEHtQD5FNBSFEW8PGED8DQNIqQJ9ZChbPYfFe17eMbo) 2025-07-06 19:41:45.223575 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDchw5+SCUaCJHfT4RyteA3ydi8OrlGPhmyhB2/VtXwytyikdQDlAm91uvTEGTzCqn2+l4Jg2KZjgvwO5pALt8N7nFOB76qtxVnS+GcgiJSMaCcBXsvcrVDrIKOkblz6shrh8L1VgmHzSjdgB46M+RcQlZ0VV//lEenUi7AQt+PtS7kWE0a7cGjCcf0iWub9PzV/cS7hDNZ19yTaHjnR1SN7oNFRH5QRMcvC6m07F11xnnHaxjem1nsyuDcqyUdYxCabP/DcVpoMDWzIhGVD+uL8be6yj9Mj67yiKNP0JUAf4W9Z0opY3pDj1hU2WulN3/YR82QAyd0FClwk0W2imifK+7GQNH/TVlv5fGMq/GOcnQQzaH1ax6FIbCrEcyEkcaYjH9NcBySYk/8dRlZ6BPGMH13m7jiz9amtGL15sTEoAobzQBIhEEBlJSNG1/uznEpOMXuYPx+qxdXB3J+LaqBkC8so+PggLfdfXlIZJHOSt8u4jVnovv20r1jHock340=) 2025-07-06 19:41:45.223585 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGjLyBWUA7B0rd7j6yo1UPEW19aZwdveaV3nR8jW4GLHDrC0zWJ+v3pWoOobjeZt6jiOe9c2lHQOEYYmUK2IPrw=) 2025-07-06 19:41:45.223589 | orchestrator | 2025-07-06 19:41:45.223594 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-06 19:41:45.223598 | orchestrator | Sunday 06 July 2025 19:41:43 +0000 (0:00:01.035) 0:00:22.140 *********** 2025-07-06 19:41:45.223603 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBYQlnUf2h+va1BJArwX3CMnHlVjouL4iPM5wwL0mxWi2k4RZJljNm6ADTQ6sTSaGLSs3c150DtNFXfgyMf2mUU=) 2025-07-06 19:41:45.223613 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7k1LninVjnmRAYhjTSJkP1E/DJG4aNdLUYUQCX3swyMYwJwIgp8tVUfP75vhEh1jZXGs0cqUesE9s9/BLTTD2y8gQ8RvHRKiJ56Hy1n2DXXYH1nJZAS/4/mY8zB0jgQrhH3za9IJbOpA5UoMVHL1ZERXkTK44nYjY29GopuRPBvt6qJy+IvmvWMk0QT3VhQB9Bplj0Z897c0gflcr1yFnHr23LKknurl2Rg1ZEmcklxDG70i/2XVg9FVZ3x4V+s33IL8pZxYrGCSvuCwoEDfN5qcVlsA9Zb3qppKuEMvvH1mzJtNaLpd1wZ4hwHz7qMag/TTzOTYQfSRXqaCPwHLfdu/Xk24pCuHOARTiHe2kLUd8zEAdBtc61Ar8qHLdt0cl9/zdwegZZHJp+Itqtgks00icYc8R1kMGdO/J7OTxePK71S1VRo0ilBWXldQwQEXzCoEHj7VSvjVf1aGR154UfrgxfDk9y5zA6VQqKf4zzJMgQ83rbyCV4IFhnoy0huM=) 
2025-07-06 19:41:45.223625 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO6YIb6arR5ldF19Ll0Z0cugNymQRmc2hBwjeMauAdFS) 2025-07-06 19:41:49.347757 | orchestrator | 2025-07-06 19:41:49.347862 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-06 19:41:49.347878 | orchestrator | Sunday 06 July 2025 19:41:45 +0000 (0:00:02.112) 0:00:24.253 *********** 2025-07-06 19:41:49.347894 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLSXiJwuF/F9CisN+gZNaxuNnJ3wuZElq6DtRA7zGMtGPm+J0BPWu2SNLfKD1cpY0Yai6Osej/lTpgfoHbE/1gpt6eJU90MNJ8jCFYbXntdsffuVPvD1sCaS4s9FlCbl5e12K65oujzeGYdnaUA2h7JSkOmOdo8JKJxzV6j5Mrl2cxhaClv9CUajP/7ws8bOHwbolCP7G/PFxVQ7R+XDL+8LfTql7T6wvxYpGnD7Y/ZG0pOO7LqDul1VGutKYQ5b4aK8tEo3E6ZUl/uqhhlffgoH10Evk4USHkIkp/bdv0esBgW5hvl0UUbd2ZS8XDrUwzB3CuATGjdYSAiqfPeLRAsVaa+YTIUlJw6ZjTK9lKRYi+wBiP/Dviyq5KP7vGyVFJAnHE86k2qmi2+MQTBlN0J01BHautW4/LaSpyawxSsvlAlXXvCwXBl35XgArvAlzs1RSRilidwa+d0S4UH4zT6I5K02x+FlCSXD6lMx8DNpivbxRqqQcW76/k822mfB8=) 2025-07-06 19:41:49.347909 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPMiLS92e3AJFxpFGJwQ70WCS72kQSbM0PaybOexuYc1qZP2Jazw88ZnAIyev4c2OONM5eIk0iY+Vyx3/jQnmak=) 2025-07-06 19:41:49.347929 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN8PJqqysJ5zPD+YS/WsGAU9cD3fWjvarckNTQWQl3/p) 2025-07-06 19:41:49.347949 | orchestrator | 2025-07-06 19:41:49.347968 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-06 19:41:49.347987 | orchestrator | Sunday 06 July 2025 19:41:46 +0000 (0:00:01.064) 0:00:25.317 *********** 2025-07-06 19:41:49.348005 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNabN9oFLUmfgJKNMv8rCg20UAEoF9hfi884pxXcvuqs4aEE3gcM68Ar6kOiaJT+0XT2fOgfFIBd/gs0Y1p+buQ=) 2025-07-06 19:41:49.348026 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCg87R+s4thB9uEAX0DU8WNwAhWCJdO0yb4xlL/WG1tGXREOLuzbaSnAoaeXHKIjJRT65GBxA4NG3AJHFlavlcuLKv/zmc/VhSVAqb8udrllyMQs2qYiq+fEX2AG91+rhJ91FFU5aCIQT7rcI9o6hYIGgdIygPVt5cDnTkIAptSKxr1j+g1wvnZ7oTeam+o1NKPz0oa24eudKLFx6Xx2lcR+rl/4ucJEk5iyo0vd12SxNOLJYgmoSpB3FK7McBf68w6vSHgQqrQsAn8LB8GendosbMOoKHwgHpTpDm2ogRPtynzh/MSjYZhMYQEU5xrOUlyPiLtBIfszuuCT1Xn4zEzeNQqsIhm6BCwsgR1FYGqPLLsiQPvrqOfzywd+ywDEqwv/44de/3DQfNZhPTY3yO0mB8Ku0D2U755hijo7bC8LsKaZ5g+mnUgH7ceRkFGNW4wzJtk64/XJaQmZZ3OjL+qMXzeHjSVJDUyBJ/VzxvprC0KQdeoTo0j3nu9vP+k4hM=) 2025-07-06 19:41:49.348081 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPzPJdgcFHgYh+ISTnH/X/K1ivsoxYOmiD1v56cQtxs1) 2025-07-06 19:41:49.348094 | orchestrator | 2025-07-06 19:41:49.348105 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-06 19:41:49.348116 | orchestrator | Sunday 06 July 2025 19:41:47 +0000 (0:00:01.040) 0:00:26.358 *********** 2025-07-06 19:41:49.348127 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDEmB33UaJCJZLWUh3AusefgSVntss81p/8JuWcpHei+BW2Edw7SHwRinfJtGu6FpA8Ay3sQGvH7cjHCYn9bsWd7k/HnGltX41TyIjUdIj+5l2bx9TYuXljFROazGfkRuxl9ioTpo7b3IuAPurr2zjGrouCooKXkccSPnjsM1+cNAXn1Cp8ztSNxzyp/c7bLGiCE+zjjD3X9TD9mNda3JzhpzPvs8LEevIHe0hvbCpIBWqTGp4X0tMj1IcS4sofuIxuJuy5xIJ9t1O3tm8gnTz12hS8ql+Q9sI83eVSH2w3BL1qLApb5Z4q/DT9fdmZYO0II9cTR5RjoOy3qJ5GJx/EiILw4lPV9bzPoYjx44tSQLOzLgDLhOiFyBiLKOSAf1ZmN1TQaMmSMLo9oJhOBwfjDnaqg2HyrNyChbVX+vtPCTfq/2YqZedZR3sYyY4I4a0z3UWNvpA6DUnTLVE1OuedqiE4xh58A7b+EJ4umKNYayWX+HMjtb43OZ6nCGeprUc=) 2025-07-06 19:41:49.348139 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOkVVTy6YVrwQ7Rj6zMbUbLD5p2kQWtYd/2fwsOsKg7gSLS1LT3T0QmGOq1H0JpMLTo2JUdxWjnz9g7eDwNHy0c=) 2025-07-06 19:41:49.348150 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPq/GxejQuvbu+CorKorLpzR1OAfZexsqn66kE6XZWvI) 2025-07-06 19:41:49.348161 | orchestrator | 2025-07-06 19:41:49.348171 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-07-06 19:41:49.348182 | orchestrator | Sunday 06 July 2025 19:41:48 +0000 (0:00:01.020) 0:00:27.378 *********** 2025-07-06 19:41:49.348194 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-07-06 19:41:49.348214 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-07-06 19:41:49.348233 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-07-06 19:41:49.348251 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-07-06 19:41:49.348269 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-07-06 19:41:49.348286 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-07-06 19:41:49.348304 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-07-06 19:41:49.348353 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:41:49.348373 | orchestrator | 2025-07-06 19:41:49.348417 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-07-06 19:41:49.348439 | orchestrator | Sunday 06 July 2025 19:41:48 +0000 (0:00:00.162) 0:00:27.541 *********** 2025-07-06 19:41:49.348458 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:41:49.348477 | orchestrator | 2025-07-06 19:41:49.348496 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-07-06 19:41:49.348518 | orchestrator | Sunday 06 July 2025 19:41:48 +0000 
(0:00:00.071) 0:00:27.612 *********** 2025-07-06 19:41:49.348539 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:41:49.348558 | orchestrator | 2025-07-06 19:41:49.348571 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-07-06 19:41:49.348584 | orchestrator | Sunday 06 July 2025 19:41:48 +0000 (0:00:00.064) 0:00:27.676 *********** 2025-07-06 19:41:49.348596 | orchestrator | changed: [testbed-manager] 2025-07-06 19:41:49.348609 | orchestrator | 2025-07-06 19:41:49.348622 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 19:41:49.348634 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-06 19:41:49.348649 | orchestrator | 2025-07-06 19:41:49.348677 | orchestrator | 2025-07-06 19:41:49.348689 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 19:41:49.348699 | orchestrator | Sunday 06 July 2025 19:41:49 +0000 (0:00:00.476) 0:00:28.153 *********** 2025-07-06 19:41:49.348710 | orchestrator | =============================================================================== 2025-07-06 19:41:49.348734 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.73s 2025-07-06 19:41:49.348746 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.22s 2025-07-06 19:41:49.348757 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 2.11s 2025-07-06 19:41:49.348768 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2025-07-06 19:41:49.348779 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-07-06 19:41:49.348808 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-07-06 
19:41:49.348819 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-07-06 19:41:49.348830 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-07-06 19:41:49.348841 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-07-06 19:41:49.348851 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-07-06 19:41:49.348862 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-07-06 19:41:49.348873 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-07-06 19:41:49.348883 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-07-06 19:41:49.348894 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-07-06 19:41:49.348905 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-07-06 19:41:49.348915 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-07-06 19:41:49.348926 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.48s 2025-07-06 19:41:49.348937 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s 2025-07-06 19:41:49.348948 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.16s 2025-07-06 19:41:49.348959 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.16s 2025-07-06 19:41:49.611420 | orchestrator | + osism apply squid 2025-07-06 19:42:01.491184 | orchestrator | 2025-07-06 19:42:01 | INFO  | Task d3970f76-3d23-46d0-ac6f-c7c73d3f97f9 (squid) was prepared for execution. 
2025-07-06 19:42:01.491302 | orchestrator | 2025-07-06 19:42:01 | INFO  | It takes a moment until task d3970f76-3d23-46d0-ac6f-c7c73d3f97f9 (squid) has been started and output is visible here. 2025-07-06 19:43:56.607351 | orchestrator | 2025-07-06 19:43:56.607517 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-07-06 19:43:56.607538 | orchestrator | 2025-07-06 19:43:56.607551 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-07-06 19:43:56.607563 | orchestrator | Sunday 06 July 2025 19:42:05 +0000 (0:00:00.166) 0:00:00.166 *********** 2025-07-06 19:43:56.607574 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-07-06 19:43:56.607587 | orchestrator | 2025-07-06 19:43:56.607598 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-07-06 19:43:56.607627 | orchestrator | Sunday 06 July 2025 19:42:05 +0000 (0:00:00.090) 0:00:00.256 *********** 2025-07-06 19:43:56.607638 | orchestrator | ok: [testbed-manager] 2025-07-06 19:43:56.607650 | orchestrator | 2025-07-06 19:43:56.607662 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-07-06 19:43:56.607673 | orchestrator | Sunday 06 July 2025 19:42:06 +0000 (0:00:01.418) 0:00:01.675 *********** 2025-07-06 19:43:56.607685 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-07-06 19:43:56.607695 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-07-06 19:43:56.607707 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-07-06 19:43:56.607745 | orchestrator | 2025-07-06 19:43:56.607757 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-07-06 19:43:56.607768 | orchestrator | Sunday 06 
July 2025 19:42:08 +0000 (0:00:01.186) 0:00:02.862 *********** 2025-07-06 19:43:56.607778 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-07-06 19:43:56.607789 | orchestrator | 2025-07-06 19:43:56.607800 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-07-06 19:43:56.607811 | orchestrator | Sunday 06 July 2025 19:42:09 +0000 (0:00:01.021) 0:00:03.884 *********** 2025-07-06 19:43:56.607821 | orchestrator | ok: [testbed-manager] 2025-07-06 19:43:56.607832 | orchestrator | 2025-07-06 19:43:56.607843 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-07-06 19:43:56.607853 | orchestrator | Sunday 06 July 2025 19:42:09 +0000 (0:00:00.372) 0:00:04.256 *********** 2025-07-06 19:43:56.607864 | orchestrator | changed: [testbed-manager] 2025-07-06 19:43:56.607877 | orchestrator | 2025-07-06 19:43:56.607889 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-07-06 19:43:56.607901 | orchestrator | Sunday 06 July 2025 19:42:10 +0000 (0:00:00.880) 0:00:05.137 *********** 2025-07-06 19:43:56.607913 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2025-07-06 19:43:56.607926 | orchestrator | ok: [testbed-manager] 2025-07-06 19:43:56.607938 | orchestrator | 2025-07-06 19:43:56.607951 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-07-06 19:43:56.607964 | orchestrator | Sunday 06 July 2025 19:42:43 +0000 (0:00:32.700) 0:00:37.837 *********** 2025-07-06 19:43:56.607977 | orchestrator | changed: [testbed-manager] 2025-07-06 19:43:56.607989 | orchestrator | 2025-07-06 19:43:56.608001 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-07-06 19:43:56.608014 | orchestrator | Sunday 06 July 2025 19:42:55 +0000 (0:00:12.494) 0:00:50.331 *********** 2025-07-06 19:43:56.608026 | orchestrator | Pausing for 60 seconds 2025-07-06 19:43:56.608039 | orchestrator | changed: [testbed-manager] 2025-07-06 19:43:56.608051 | orchestrator | 2025-07-06 19:43:56.608064 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-07-06 19:43:56.608077 | orchestrator | Sunday 06 July 2025 19:43:55 +0000 (0:01:00.071) 0:01:50.403 *********** 2025-07-06 19:43:56.608090 | orchestrator | ok: [testbed-manager] 2025-07-06 19:43:56.608102 | orchestrator | 2025-07-06 19:43:56.608115 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-07-06 19:43:56.608128 | orchestrator | Sunday 06 July 2025 19:43:55 +0000 (0:00:00.068) 0:01:50.471 *********** 2025-07-06 19:43:56.608140 | orchestrator | changed: [testbed-manager] 2025-07-06 19:43:56.608152 | orchestrator | 2025-07-06 19:43:56.608165 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 19:43:56.608178 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 19:43:56.608191 | orchestrator | 2025-07-06 19:43:56.608203 | orchestrator | 2025-07-06 19:43:56.608216 | orchestrator | 
TASKS RECAP ******************************************************************** 2025-07-06 19:43:56.608228 | orchestrator | Sunday 06 July 2025 19:43:56 +0000 (0:00:00.658) 0:01:51.130 *********** 2025-07-06 19:43:56.608240 | orchestrator | =============================================================================== 2025-07-06 19:43:56.608253 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.07s 2025-07-06 19:43:56.608265 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 32.70s 2025-07-06 19:43:56.608278 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.49s 2025-07-06 19:43:56.608290 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.42s 2025-07-06 19:43:56.608301 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.19s 2025-07-06 19:43:56.608311 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.02s 2025-07-06 19:43:56.608336 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.88s 2025-07-06 19:43:56.608355 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.66s 2025-07-06 19:43:56.608372 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.37s 2025-07-06 19:43:56.608391 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s 2025-07-06 19:43:56.608411 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2025-07-06 19:43:56.887100 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-07-06 19:43:56.887210 | orchestrator | ++ semver latest 9.0.0 2025-07-06 19:43:56.937192 | orchestrator | + [[ -1 -lt 0 ]] 2025-07-06 19:43:56.937291 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-07-06 19:43:56.937440 | 
orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-07-06 19:44:08.844630 | orchestrator | 2025-07-06 19:44:08 | INFO  | Task fcb0919f-0d98-46db-bc47-55d0aaea437f (operator) was prepared for execution. 2025-07-06 19:44:08.844747 | orchestrator | 2025-07-06 19:44:08 | INFO  | It takes a moment until task fcb0919f-0d98-46db-bc47-55d0aaea437f (operator) has been started and output is visible here. 2025-07-06 19:44:24.614739 | orchestrator | 2025-07-06 19:44:24.614852 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-07-06 19:44:24.614868 | orchestrator | 2025-07-06 19:44:24.614907 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-06 19:44:24.614919 | orchestrator | Sunday 06 July 2025 19:44:12 +0000 (0:00:00.149) 0:00:00.149 *********** 2025-07-06 19:44:24.614930 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:44:24.614942 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:44:24.614953 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:44:24.614964 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:44:24.614974 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:44:24.614985 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:44:24.614996 | orchestrator | 2025-07-06 19:44:24.615026 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-07-06 19:44:24.615038 | orchestrator | Sunday 06 July 2025 19:44:16 +0000 (0:00:03.309) 0:00:03.459 *********** 2025-07-06 19:44:24.615049 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:44:24.615060 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:44:24.615070 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:44:24.615081 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:44:24.615092 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:44:24.615102 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:44:24.615113 | orchestrator | 2025-07-06 
19:44:24.615123 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-07-06 19:44:24.615134 | orchestrator | 2025-07-06 19:44:24.615145 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-07-06 19:44:24.615156 | orchestrator | Sunday 06 July 2025 19:44:16 +0000 (0:00:00.761) 0:00:04.221 *********** 2025-07-06 19:44:24.615167 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:44:24.615178 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:44:24.615189 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:44:24.615199 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:44:24.615210 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:44:24.615221 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:44:24.615231 | orchestrator | 2025-07-06 19:44:24.615242 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-07-06 19:44:24.615253 | orchestrator | Sunday 06 July 2025 19:44:16 +0000 (0:00:00.161) 0:00:04.383 *********** 2025-07-06 19:44:24.615264 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:44:24.615274 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:44:24.615285 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:44:24.615295 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:44:24.615306 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:44:24.615317 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:44:24.615327 | orchestrator | 2025-07-06 19:44:24.615338 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-07-06 19:44:24.615371 | orchestrator | Sunday 06 July 2025 19:44:17 +0000 (0:00:00.162) 0:00:04.546 *********** 2025-07-06 19:44:24.615383 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:44:24.615395 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:44:24.615405 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:44:24.615416 | 
orchestrator | changed: [testbed-node-0] 2025-07-06 19:44:24.615426 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:44:24.615437 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:44:24.615447 | orchestrator | 2025-07-06 19:44:24.615458 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-07-06 19:44:24.615469 | orchestrator | Sunday 06 July 2025 19:44:17 +0000 (0:00:00.609) 0:00:05.155 *********** 2025-07-06 19:44:24.615504 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:44:24.615515 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:44:24.615526 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:44:24.615537 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:44:24.615548 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:44:24.615559 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:44:24.615569 | orchestrator | 2025-07-06 19:44:24.615580 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-07-06 19:44:24.615591 | orchestrator | Sunday 06 July 2025 19:44:18 +0000 (0:00:00.882) 0:00:06.038 *********** 2025-07-06 19:44:24.615602 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-07-06 19:44:24.615613 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-07-06 19:44:24.615624 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-07-06 19:44:24.615634 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-07-06 19:44:24.615645 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-07-06 19:44:24.615655 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-07-06 19:44:24.615666 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-07-06 19:44:24.615677 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-07-06 19:44:24.615687 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-07-06 19:44:24.615698 | orchestrator | changed: 
[testbed-node-4] => (item=sudo)
2025-07-06 19:44:24.615709 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2025-07-06 19:44:24.615719 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2025-07-06 19:44:24.615730 | orchestrator |
2025-07-06 19:44:24.615741 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-07-06 19:44:24.615752 | orchestrator | Sunday 06 July 2025 19:44:19 +0000 (0:00:01.159) 0:00:07.198 ***********
2025-07-06 19:44:24.615762 | orchestrator | changed: [testbed-node-4]
2025-07-06 19:44:24.615773 | orchestrator | changed: [testbed-node-0]
2025-07-06 19:44:24.615784 | orchestrator | changed: [testbed-node-1]
2025-07-06 19:44:24.615794 | orchestrator | changed: [testbed-node-5]
2025-07-06 19:44:24.615805 | orchestrator | changed: [testbed-node-2]
2025-07-06 19:44:24.615815 | orchestrator | changed: [testbed-node-3]
2025-07-06 19:44:24.615826 | orchestrator |
2025-07-06 19:44:24.615837 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-07-06 19:44:24.615849 | orchestrator | Sunday 06 July 2025 19:44:21 +0000 (0:00:01.347) 0:00:08.545 ***********
2025-07-06 19:44:24.615860 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2025-07-06 19:44:24.615870 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2025-07-06 19:44:24.615881 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2025-07-06 19:44:24.615892 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2025-07-06 19:44:24.615920 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2025-07-06 19:44:24.615932 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2025-07-06 19:44:24.615943 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2025-07-06 19:44:24.615954 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2025-07-06 19:44:24.615972 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2025-07-06 19:44:24.615983 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2025-07-06 19:44:24.615994 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2025-07-06 19:44:24.616005 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2025-07-06 19:44:24.616015 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2025-07-06 19:44:24.616026 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2025-07-06 19:44:24.616036 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2025-07-06 19:44:24.616047 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2025-07-06 19:44:24.616057 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2025-07-06 19:44:24.616068 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2025-07-06 19:44:24.616079 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2025-07-06 19:44:24.616089 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2025-07-06 19:44:24.616100 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2025-07-06 19:44:24.616110 | orchestrator |
2025-07-06 19:44:24.616121 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2025-07-06 19:44:24.616133 | orchestrator | Sunday 06 July 2025 19:44:22 +0000 (0:00:01.290) 0:00:09.836 ***********
2025-07-06 19:44:24.616144 | orchestrator | skipping: [testbed-node-0]
2025-07-06 19:44:24.616155 | orchestrator | skipping: [testbed-node-1]
2025-07-06 19:44:24.616165 | orchestrator | skipping: [testbed-node-2]
2025-07-06 19:44:24.616176 | orchestrator | skipping: [testbed-node-3]
2025-07-06 19:44:24.616187 | orchestrator | skipping: [testbed-node-4]
2025-07-06 19:44:24.616197 | orchestrator | skipping: [testbed-node-5]
2025-07-06 19:44:24.616208 | orchestrator |
2025-07-06 19:44:24.616219 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-07-06 19:44:24.616229 | orchestrator | Sunday 06 July 2025 19:44:22 +0000 (0:00:00.147) 0:00:09.984 ***********
2025-07-06 19:44:24.616240 | orchestrator | changed: [testbed-node-4]
2025-07-06 19:44:24.616251 | orchestrator | changed: [testbed-node-0]
2025-07-06 19:44:24.616261 | orchestrator | changed: [testbed-node-1]
2025-07-06 19:44:24.616272 | orchestrator | changed: [testbed-node-5]
2025-07-06 19:44:24.616282 | orchestrator | changed: [testbed-node-3]
2025-07-06 19:44:24.616293 | orchestrator | changed: [testbed-node-2]
2025-07-06 19:44:24.616303 | orchestrator |
2025-07-06 19:44:24.616314 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-07-06 19:44:24.616325 | orchestrator | Sunday 06 July 2025 19:44:23 +0000 (0:00:00.611) 0:00:10.596 ***********
2025-07-06 19:44:24.616335 | orchestrator | skipping: [testbed-node-0]
2025-07-06 19:44:24.616346 | orchestrator | skipping: [testbed-node-1]
2025-07-06 19:44:24.616356 | orchestrator | skipping: [testbed-node-2]
2025-07-06 19:44:24.616367 | orchestrator | skipping: [testbed-node-3]
2025-07-06 19:44:24.616378 | orchestrator | skipping: [testbed-node-4]
2025-07-06 19:44:24.616388 | orchestrator | skipping: [testbed-node-5]
2025-07-06 19:44:24.616399 | orchestrator |
2025-07-06 19:44:24.616409 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-07-06 19:44:24.616420 | orchestrator | Sunday 06 July 2025 19:44:23 +0000 (0:00:00.193) 0:00:10.789 ***********
2025-07-06 19:44:24.616431 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-07-06 19:44:24.616441 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-07-06 19:44:24.616452 | orchestrator | changed: [testbed-node-1]
2025-07-06 19:44:24.616463 | orchestrator | changed: [testbed-node-4]
2025-07-06 19:44:24.616473 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-07-06 19:44:24.616503 | orchestrator | changed: [testbed-node-5]
2025-07-06 19:44:24.616514 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-07-06 19:44:24.616525 | orchestrator | changed: [testbed-node-3]
2025-07-06 19:44:24.616547 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-07-06 19:44:24.616558 | orchestrator | changed: [testbed-node-0]
2025-07-06 19:44:24.616569 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-07-06 19:44:24.616580 | orchestrator | changed: [testbed-node-2]
2025-07-06 19:44:24.616590 | orchestrator |
2025-07-06 19:44:24.616601 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-07-06 19:44:24.616612 | orchestrator | Sunday 06 July 2025 19:44:24 +0000 (0:00:00.794) 0:00:11.584 ***********
2025-07-06 19:44:24.616623 | orchestrator | skipping: [testbed-node-0]
2025-07-06 19:44:24.616634 | orchestrator | skipping: [testbed-node-1]
2025-07-06 19:44:24.616644 | orchestrator | skipping: [testbed-node-2]
2025-07-06 19:44:24.616655 | orchestrator | skipping: [testbed-node-3]
2025-07-06 19:44:24.616665 | orchestrator | skipping: [testbed-node-4]
2025-07-06 19:44:24.616676 | orchestrator | skipping: [testbed-node-5]
2025-07-06 19:44:24.616687 | orchestrator |
2025-07-06 19:44:24.616697 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-07-06 19:44:24.616708 | orchestrator | Sunday 06 July 2025 19:44:24 +0000 (0:00:00.141) 0:00:11.725 ***********
2025-07-06 19:44:24.616719 | orchestrator | skipping: [testbed-node-0]
2025-07-06 19:44:24.616729 | orchestrator | skipping: [testbed-node-1]
2025-07-06 19:44:24.616740 | orchestrator | skipping: [testbed-node-2]
2025-07-06 19:44:24.616751 | orchestrator | skipping: [testbed-node-3]
2025-07-06 19:44:24.616767 | orchestrator | skipping: [testbed-node-4]
2025-07-06 19:44:24.616778 | orchestrator | skipping: [testbed-node-5]
2025-07-06 19:44:24.616789 | orchestrator |
2025-07-06 19:44:24.616799 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-07-06 19:44:24.616810 | orchestrator | Sunday 06 July 2025 19:44:24 +0000 (0:00:00.148) 0:00:11.873 ***********
2025-07-06 19:44:24.616821 | orchestrator | skipping: [testbed-node-0]
2025-07-06 19:44:24.616832 | orchestrator | skipping: [testbed-node-1]
2025-07-06 19:44:24.616843 | orchestrator | skipping: [testbed-node-2]
2025-07-06 19:44:24.616853 | orchestrator | skipping: [testbed-node-3]
2025-07-06 19:44:24.616871 | orchestrator | skipping: [testbed-node-4]
2025-07-06 19:44:25.662878 | orchestrator | skipping: [testbed-node-5]
2025-07-06 19:44:25.662983 | orchestrator |
2025-07-06 19:44:25.663019 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-07-06 19:44:25.663033 | orchestrator | Sunday 06 July 2025 19:44:24 +0000 (0:00:00.142) 0:00:12.016 ***********
2025-07-06 19:44:25.663044 | orchestrator | changed: [testbed-node-0]
2025-07-06 19:44:25.663055 | orchestrator | changed: [testbed-node-1]
2025-07-06 19:44:25.663065 | orchestrator | changed: [testbed-node-2]
2025-07-06 19:44:25.663076 | orchestrator | changed: [testbed-node-4]
2025-07-06 19:44:25.663087 | orchestrator | changed: [testbed-node-3]
2025-07-06 19:44:25.663098 | orchestrator | changed: [testbed-node-5]
2025-07-06 19:44:25.663108 | orchestrator |
2025-07-06 19:44:25.663119 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-07-06 19:44:25.663130 | orchestrator | Sunday 06 July 2025 19:44:25 +0000 (0:00:00.634) 0:00:12.650 ***********
2025-07-06 19:44:25.663141 | orchestrator | skipping: [testbed-node-0]
2025-07-06 19:44:25.663152 | orchestrator | skipping: [testbed-node-1]
2025-07-06 19:44:25.663162 | orchestrator | skipping: [testbed-node-2]
2025-07-06 19:44:25.663173 | orchestrator | skipping: [testbed-node-3]
2025-07-06 19:44:25.663184 | orchestrator | skipping: [testbed-node-4]
2025-07-06 19:44:25.663194 | orchestrator | skipping: [testbed-node-5]
2025-07-06 19:44:25.663205 | orchestrator |
2025-07-06 19:44:25.663216 | orchestrator | PLAY RECAP *********************************************************************
2025-07-06 19:44:25.663228 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-06 19:44:25.663241 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-06 19:44:25.663273 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-06 19:44:25.663284 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-06 19:44:25.663295 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-06 19:44:25.663306 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-06 19:44:25.663316 | orchestrator |
2025-07-06 19:44:25.663327 | orchestrator |
2025-07-06 19:44:25.663338 | orchestrator | TASKS RECAP ********************************************************************
2025-07-06 19:44:25.663349 | orchestrator | Sunday 06 July 2025 19:44:25 +0000 (0:00:00.198) 0:00:12.849 ***********
2025-07-06 19:44:25.663360 | orchestrator | ===============================================================================
2025-07-06 19:44:25.663371 | orchestrator | Gathering Facts --------------------------------------------------------- 3.31s
2025-07-06 19:44:25.663381 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.35s
2025-07-06 19:44:25.663392 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.29s
2025-07-06 19:44:25.663404 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.16s
2025-07-06 19:44:25.663415 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.88s
2025-07-06 19:44:25.663427 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.79s
2025-07-06 19:44:25.663439 | orchestrator | Do not require tty for all users ---------------------------------------- 0.76s
2025-07-06 19:44:25.663452 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.63s
2025-07-06 19:44:25.663464 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.61s
2025-07-06 19:44:25.663475 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.61s
2025-07-06 19:44:25.663513 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.20s
2025-07-06 19:44:25.663526 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.19s
2025-07-06 19:44:25.663538 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.16s
2025-07-06 19:44:25.663551 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.16s
2025-07-06 19:44:25.663563 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.15s
2025-07-06 19:44:25.663576 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.15s
2025-07-06 19:44:25.663589 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.14s
2025-07-06 19:44:25.663602 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.14s
2025-07-06 19:44:25.930415 | orchestrator | + osism apply --environment custom facts
2025-07-06 19:44:27.692589 | orchestrator | 2025-07-06 19:44:27 | INFO  | Trying to run play facts in environment custom
2025-07-06 19:44:37.889253 | orchestrator | 2025-07-06 19:44:37 | INFO  | Task 3eb2ba1f-f6ed-4bf3-a56f-cb6941c857b7 (facts) was prepared for execution.
2025-07-06 19:44:37.889363 | orchestrator | 2025-07-06 19:44:37 | INFO  | It takes a moment until task 3eb2ba1f-f6ed-4bf3-a56f-cb6941c857b7 (facts) has been started and output is visible here.
2025-07-06 19:45:20.115319 | orchestrator |
2025-07-06 19:45:20.115401 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-07-06 19:45:20.115408 | orchestrator |
2025-07-06 19:45:20.115413 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-07-06 19:45:20.115417 | orchestrator | Sunday 06 July 2025 19:44:41 +0000 (0:00:00.085) 0:00:00.085 ***********
2025-07-06 19:45:20.115437 | orchestrator | ok: [testbed-manager]
2025-07-06 19:45:20.115442 | orchestrator | changed: [testbed-node-1]
2025-07-06 19:45:20.115447 | orchestrator | changed: [testbed-node-4]
2025-07-06 19:45:20.115451 | orchestrator | changed: [testbed-node-3]
2025-07-06 19:45:20.115454 | orchestrator | changed: [testbed-node-5]
2025-07-06 19:45:20.115458 | orchestrator | changed: [testbed-node-2]
2025-07-06 19:45:20.115462 | orchestrator | changed: [testbed-node-0]
2025-07-06 19:45:20.115465 | orchestrator |
2025-07-06 19:45:20.115469 | orchestrator | TASK [Copy fact file] **********************************************************
2025-07-06 19:45:20.115473 | orchestrator | Sunday 06 July 2025 19:44:43 +0000 (0:00:01.455) 0:00:01.541 ***********
2025-07-06 19:45:20.115477 | orchestrator | ok: [testbed-manager]
2025-07-06 19:45:20.115480 | orchestrator | changed: [testbed-node-0]
2025-07-06 19:45:20.115484 | orchestrator | changed: [testbed-node-4]
2025-07-06 19:45:20.115488 | orchestrator | changed: [testbed-node-5]
2025-07-06 19:45:20.115491 | orchestrator | changed: [testbed-node-2]
2025-07-06 19:45:20.115495 | orchestrator | changed: [testbed-node-3]
2025-07-06 19:45:20.115499 | orchestrator | changed: [testbed-node-1]
2025-07-06 19:45:20.115502 | orchestrator |
2025-07-06 19:45:20.115506 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-07-06 19:45:20.115510 | orchestrator |
2025-07-06 19:45:20.115513 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-07-06 19:45:20.115517 | orchestrator | Sunday 06 July 2025 19:44:44 +0000 (0:00:01.264) 0:00:02.805 ***********
2025-07-06 19:45:20.115521 | orchestrator | ok: [testbed-node-3]
2025-07-06 19:45:20.115576 | orchestrator | ok: [testbed-node-4]
2025-07-06 19:45:20.115581 | orchestrator | ok: [testbed-node-5]
2025-07-06 19:45:20.115585 | orchestrator |
2025-07-06 19:45:20.115589 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-07-06 19:45:20.115593 | orchestrator | Sunday 06 July 2025 19:44:44 +0000 (0:00:00.114) 0:00:02.919 ***********
2025-07-06 19:45:20.115597 | orchestrator | ok: [testbed-node-3]
2025-07-06 19:45:20.115601 | orchestrator | ok: [testbed-node-4]
2025-07-06 19:45:20.115605 | orchestrator | ok: [testbed-node-5]
2025-07-06 19:45:20.115608 | orchestrator |
2025-07-06 19:45:20.115612 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-07-06 19:45:20.115616 | orchestrator | Sunday 06 July 2025 19:44:44 +0000 (0:00:00.238) 0:00:03.157 ***********
2025-07-06 19:45:20.115620 | orchestrator | ok: [testbed-node-4]
2025-07-06 19:45:20.115623 | orchestrator | ok: [testbed-node-3]
2025-07-06 19:45:20.115627 | orchestrator | ok: [testbed-node-5]
2025-07-06 19:45:20.115631 | orchestrator |
2025-07-06 19:45:20.115634 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-07-06 19:45:20.115638 | orchestrator | Sunday 06 July 2025 19:44:44 +0000 (0:00:00.207) 0:00:03.364 ***********
2025-07-06 19:45:20.115643 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-06 19:45:20.115648 | orchestrator |
2025-07-06 19:45:20.115653 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-07-06 19:45:20.115657 | orchestrator | Sunday 06 July 2025 19:44:45 +0000 (0:00:00.150) 0:00:03.515 ***********
2025-07-06 19:45:20.115660 | orchestrator | ok: [testbed-node-4]
2025-07-06 19:45:20.115664 | orchestrator | ok: [testbed-node-3]
2025-07-06 19:45:20.115668 | orchestrator | ok: [testbed-node-5]
2025-07-06 19:45:20.115671 | orchestrator |
2025-07-06 19:45:20.115675 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-07-06 19:45:20.115679 | orchestrator | Sunday 06 July 2025 19:44:45 +0000 (0:00:00.455) 0:00:03.970 ***********
2025-07-06 19:45:20.115682 | orchestrator | skipping: [testbed-node-3]
2025-07-06 19:45:20.115686 | orchestrator | skipping: [testbed-node-4]
2025-07-06 19:45:20.115690 | orchestrator | skipping: [testbed-node-5]
2025-07-06 19:45:20.115693 | orchestrator |
2025-07-06 19:45:20.115697 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-07-06 19:45:20.115705 | orchestrator | Sunday 06 July 2025 19:44:45 +0000 (0:00:00.134) 0:00:04.104 ***********
2025-07-06 19:45:20.115709 | orchestrator | changed: [testbed-node-3]
2025-07-06 19:45:20.115713 | orchestrator | changed: [testbed-node-4]
2025-07-06 19:45:20.115717 | orchestrator | changed: [testbed-node-5]
2025-07-06 19:45:20.115720 | orchestrator |
2025-07-06 19:45:20.115724 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-07-06 19:45:20.115728 | orchestrator | Sunday 06 July 2025 19:44:46 +0000 (0:00:01.071) 0:00:05.176 ***********
2025-07-06 19:45:20.115731 | orchestrator | ok: [testbed-node-4]
2025-07-06 19:45:20.115735 | orchestrator | ok: [testbed-node-3]
2025-07-06 19:45:20.115739 | orchestrator | ok: [testbed-node-5]
2025-07-06 19:45:20.115742 | orchestrator |
2025-07-06 19:45:20.115746 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-07-06 19:45:20.115750 | orchestrator | Sunday 06 July 2025 19:44:47 +0000 (0:00:00.456) 0:00:05.633 ***********
2025-07-06 19:45:20.115754 | orchestrator | changed: [testbed-node-3]
2025-07-06 19:45:20.115757 | orchestrator | changed: [testbed-node-4]
2025-07-06 19:45:20.115761 | orchestrator | changed: [testbed-node-5]
2025-07-06 19:45:20.115765 | orchestrator |
2025-07-06 19:45:20.115769 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-07-06 19:45:20.115772 | orchestrator | Sunday 06 July 2025 19:44:48 +0000 (0:00:01.074) 0:00:06.707 ***********
2025-07-06 19:45:20.115776 | orchestrator | changed: [testbed-node-4]
2025-07-06 19:45:20.115780 | orchestrator | changed: [testbed-node-5]
2025-07-06 19:45:20.115784 | orchestrator | changed: [testbed-node-3]
2025-07-06 19:45:20.115787 | orchestrator |
2025-07-06 19:45:20.115804 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-07-06 19:45:20.115808 | orchestrator | Sunday 06 July 2025 19:45:02 +0000 (0:00:14.236) 0:00:20.944 ***********
2025-07-06 19:45:20.115812 | orchestrator | skipping: [testbed-node-3]
2025-07-06 19:45:20.115815 | orchestrator | skipping: [testbed-node-4]
2025-07-06 19:45:20.115819 | orchestrator | skipping: [testbed-node-5]
2025-07-06 19:45:20.115823 | orchestrator |
2025-07-06 19:45:20.115827 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-07-06 19:45:20.115840 | orchestrator | Sunday 06 July 2025 19:45:02 +0000 (0:00:00.105) 0:00:21.049 ***********
2025-07-06 19:45:20.115844 | orchestrator | changed: [testbed-node-4]
2025-07-06 19:45:20.115848 | orchestrator | changed: [testbed-node-5]
2025-07-06 19:45:20.115861 | orchestrator | changed: [testbed-node-3]
2025-07-06 19:45:20.115864 | orchestrator |
2025-07-06 19:45:20.115868 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-07-06 19:45:20.115872 | orchestrator | Sunday 06 July 2025 19:45:11 +0000 (0:00:08.330) 0:00:29.380 ***********
2025-07-06 19:45:20.115875 | orchestrator | ok: [testbed-node-3]
2025-07-06 19:45:20.115879 | orchestrator | ok: [testbed-node-4]
2025-07-06 19:45:20.115883 | orchestrator | ok: [testbed-node-5]
2025-07-06 19:45:20.115886 | orchestrator |
2025-07-06 19:45:20.115890 | orchestrator | TASK [Copy fact files] *********************************************************
2025-07-06 19:45:20.115894 | orchestrator | Sunday 06 July 2025 19:45:11 +0000 (0:00:00.415) 0:00:29.796 ***********
2025-07-06 19:45:20.115898 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2025-07-06 19:45:20.115901 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2025-07-06 19:45:20.115905 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2025-07-06 19:45:20.115909 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2025-07-06 19:45:20.115912 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2025-07-06 19:45:20.115916 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2025-07-06 19:45:20.115920 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2025-07-06 19:45:20.115923 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2025-07-06 19:45:20.115927 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2025-07-06 19:45:20.115934 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2025-07-06 19:45:20.115938 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2025-07-06 19:45:20.115942 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2025-07-06 19:45:20.115945 | orchestrator |
2025-07-06 19:45:20.115949 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-07-06 19:45:20.115953 | orchestrator | Sunday 06 July 2025 19:45:15 +0000 (0:00:03.589) 0:00:33.385 ***********
2025-07-06 19:45:20.115957 | orchestrator | ok: [testbed-node-4]
2025-07-06 19:45:20.115960 | orchestrator | ok: [testbed-node-3]
2025-07-06 19:45:20.115964 | orchestrator | ok: [testbed-node-5]
2025-07-06 19:45:20.115968 | orchestrator |
2025-07-06 19:45:20.115971 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-07-06 19:45:20.115975 | orchestrator |
2025-07-06 19:45:20.115979 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-07-06 19:45:20.115982 | orchestrator | Sunday 06 July 2025 19:45:16 +0000 (0:00:01.239) 0:00:34.625 ***********
2025-07-06 19:45:20.115986 | orchestrator | ok: [testbed-node-1]
2025-07-06 19:45:20.115990 | orchestrator | ok: [testbed-node-0]
2025-07-06 19:45:20.115993 | orchestrator | ok: [testbed-node-2]
2025-07-06 19:45:20.115997 | orchestrator | ok: [testbed-manager]
2025-07-06 19:45:20.116001 | orchestrator | ok: [testbed-node-3]
2025-07-06 19:45:20.116004 | orchestrator | ok: [testbed-node-4]
2025-07-06 19:45:20.116008 | orchestrator | ok: [testbed-node-5]
2025-07-06 19:45:20.116012 | orchestrator |
2025-07-06 19:45:20.116015 | orchestrator | PLAY RECAP *********************************************************************
2025-07-06 19:45:20.116020 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 19:45:20.116025 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 19:45:20.116030 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 19:45:20.116034 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 19:45:20.116038 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-06 19:45:20.116042 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-06 19:45:20.116045 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-06 19:45:20.116049 | orchestrator |
2025-07-06 19:45:20.116053 | orchestrator |
2025-07-06 19:45:20.116057 | orchestrator | TASKS RECAP ********************************************************************
2025-07-06 19:45:20.116060 | orchestrator | Sunday 06 July 2025 19:45:20 +0000 (0:00:03.839) 0:00:38.464 ***********
2025-07-06 19:45:20.116064 | orchestrator | ===============================================================================
2025-07-06 19:45:20.116068 | orchestrator | osism.commons.repository : Update package cache ------------------------ 14.24s
2025-07-06 19:45:20.116071 | orchestrator | Install required packages (Debian) -------------------------------------- 8.33s
2025-07-06 19:45:20.116075 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.84s
2025-07-06 19:45:20.116079 | orchestrator | Copy fact files --------------------------------------------------------- 3.59s
2025-07-06 19:45:20.116082 | orchestrator | Create custom facts directory ------------------------------------------- 1.46s
2025-07-06 19:45:20.116086 | orchestrator | Copy fact file ---------------------------------------------------------- 1.26s
2025-07-06 19:45:20.116092 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.24s
2025-07-06 19:45:20.301111 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.07s
2025-07-06 19:45:20.301260 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.07s
2025-07-06 19:45:20.301285 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.46s
2025-07-06 19:45:20.301303 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.46s
2025-07-06 19:45:20.301320 | orchestrator | Create custom facts directory ------------------------------------------- 0.42s
2025-07-06 19:45:20.301338 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.24s
2025-07-06 19:45:20.301355 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.21s
2025-07-06 19:45:20.301372 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.15s
2025-07-06 19:45:20.301389 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.13s
2025-07-06 19:45:20.301407 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.11s
2025-07-06 19:45:20.301425 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.11s
2025-07-06 19:45:20.547458 | orchestrator | + osism apply bootstrap
2025-07-06 19:45:32.439741 | orchestrator | 2025-07-06 19:45:32 | INFO  | Task 8fb346b7-95fe-4c19-a81d-08963aa98137 (bootstrap) was prepared for execution.
2025-07-06 19:45:32.439856 | orchestrator | 2025-07-06 19:45:32 | INFO  | It takes a moment until task 8fb346b7-95fe-4c19-a81d-08963aa98137 (bootstrap) has been started and output is visible here.
2025-07-06 19:45:49.005610 | orchestrator |
2025-07-06 19:45:49.005744 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-07-06 19:45:49.005772 | orchestrator |
2025-07-06 19:45:49.005792 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-07-06 19:45:49.005811 | orchestrator | Sunday 06 July 2025 19:45:36 +0000 (0:00:00.163) 0:00:00.163 ***********
2025-07-06 19:45:49.005832 | orchestrator | ok: [testbed-manager]
2025-07-06 19:45:49.005850 | orchestrator | ok: [testbed-node-0]
2025-07-06 19:45:49.005862 | orchestrator | ok: [testbed-node-1]
2025-07-06 19:45:49.005873 | orchestrator | ok: [testbed-node-2]
2025-07-06 19:45:49.005886 | orchestrator | ok: [testbed-node-3]
2025-07-06 19:45:49.005905 | orchestrator | ok: [testbed-node-4]
2025-07-06 19:45:49.005923 | orchestrator | ok: [testbed-node-5]
2025-07-06 19:45:49.005941 | orchestrator |
2025-07-06 19:45:49.005960 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-07-06 19:45:49.005979 | orchestrator |
2025-07-06 19:45:49.005997 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-07-06 19:45:49.006079 | orchestrator | Sunday 06 July 2025 19:45:36 +0000 (0:00:00.235) 0:00:00.398 ***********
2025-07-06 19:45:49.006096 | orchestrator | ok: [testbed-node-0]
2025-07-06 19:45:49.006109 | orchestrator | ok: [testbed-node-1]
2025-07-06 19:45:49.006121 | orchestrator | ok: [testbed-node-2]
2025-07-06 19:45:49.006135 | orchestrator | ok: [testbed-manager]
2025-07-06 19:45:49.006147 | orchestrator | ok: [testbed-node-4]
2025-07-06 19:45:49.006160 | orchestrator | ok: [testbed-node-3]
2025-07-06 19:45:49.006173 | orchestrator | ok: [testbed-node-5]
2025-07-06 19:45:49.006185 | orchestrator |
2025-07-06 19:45:49.006199 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2025-07-06 19:45:49.006212 | orchestrator |
2025-07-06 19:45:49.006225 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-07-06 19:45:49.006239 | orchestrator | Sunday 06 July 2025 19:45:40 +0000 (0:00:03.747) 0:00:04.146 ***********
2025-07-06 19:45:49.006261 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-07-06 19:45:49.006280 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-07-06 19:45:49.006299 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2025-07-06 19:45:49.006319 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-07-06 19:45:49.006367 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-07-06 19:45:49.006381 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-07-06 19:45:49.006394 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-07-06 19:45:49.006407 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2025-07-06 19:45:49.006420 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-07-06 19:45:49.006433 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-07-06 19:45:49.006446 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2025-07-06 19:45:49.006458 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-07-06 19:45:49.006469 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-07-06 19:45:49.006480 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-07-06 19:45:49.006491 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-07-06 19:45:49.006502 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-07-06 19:45:49.006512 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-07-06 19:45:49.006523 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-07-06 19:45:49.006534 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-07-06 19:45:49.006545 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-07-06 19:45:49.006592 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-07-06 19:45:49.006611 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-07-06 19:45:49.006629 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-07-06 19:45:49.006648 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-07-06 19:45:49.006667 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-07-06 19:45:49.006687 | orchestrator | skipping: [testbed-manager]
2025-07-06 19:45:49.006703 | orchestrator | skipping: [testbed-node-0]
2025-07-06 19:45:49.006714 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-07-06 19:45:49.006725 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-07-06 19:45:49.006736 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-07-06 19:45:49.006746 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-07-06 19:45:49.006756 | orchestrator | skipping: [testbed-node-2]
2025-07-06 19:45:49.006767 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-07-06 19:45:49.006778 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-07-06 19:45:49.006789 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-07-06 19:45:49.006799 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-07-06 19:45:49.006810 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-07-06 19:45:49.006820 | orchestrator | skipping: [testbed-node-1]
2025-07-06 19:45:49.006849 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2025-07-06 19:45:49.006861 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-07-06 19:45:49.006872 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-06 19:45:49.006882 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-07-06 19:45:49.006893 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-07-06 19:45:49.006903 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-06 19:45:49.006914 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-07-06 19:45:49.006925 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-07-06 19:45:49.006956 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-06 19:45:49.006968 | orchestrator | skipping: [testbed-node-3]
2025-07-06 19:45:49.006979 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-07-06 19:45:49.006999 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-07-06 19:45:49.007011 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-07-06 19:45:49.007022 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-07-06 19:45:49.007041 | orchestrator | skipping: [testbed-node-4]
2025-07-06 19:45:49.007060 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-07-06 19:45:49.007079 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-07-06 19:45:49.007097 | orchestrator | skipping: [testbed-node-5]
2025-07-06 19:45:49.007115 | orchestrator |
2025-07-06 19:45:49.007132 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2025-07-06 19:45:49.007150 | orchestrator |
2025-07-06 19:45:49.007167 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2025-07-06 19:45:49.007185 | orchestrator | Sunday 06 July 2025 19:45:40 +0000 (0:00:00.444) 0:00:04.591 ***********
2025-07-06 19:45:49.007205 | orchestrator | ok: [testbed-manager]
2025-07-06 19:45:49.007224 | orchestrator | ok: [testbed-node-0]
2025-07-06 19:45:49.007236 | orchestrator | ok: [testbed-node-5]
2025-07-06 19:45:49.007254 | orchestrator | ok: [testbed-node-3]
2025-07-06 19:45:49.007272 | orchestrator | ok: [testbed-node-1]
2025-07-06 19:45:49.007290 | orchestrator | ok: [testbed-node-2]
2025-07-06 19:45:49.007308 | orchestrator | ok: [testbed-node-4]
2025-07-06 19:45:49.007326 | orchestrator |
2025-07-06 19:45:49.007345 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2025-07-06 19:45:49.007365 | orchestrator | Sunday 06 July 2025 19:45:43 +0000 (0:00:02.170) 0:00:06.762 ***********
2025-07-06 19:45:49.007384 | orchestrator | ok: [testbed-manager]
2025-07-06 19:45:49.007396 | orchestrator | ok: [testbed-node-3]
2025-07-06 19:45:49.007406 | orchestrator | ok: [testbed-node-4]
2025-07-06 19:45:49.007417 | orchestrator | ok: [testbed-node-5]
2025-07-06 19:45:49.007428 | orchestrator | ok: [testbed-node-2]
2025-07-06 19:45:49.007438 | orchestrator | ok: [testbed-node-0]
2025-07-06 19:45:49.007449 | orchestrator | ok: [testbed-node-1]
2025-07-06 19:45:49.007460 | orchestrator |
2025-07-06 19:45:49.007471 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2025-07-06 19:45:49.007482 | orchestrator | Sunday 06 July 2025 19:45:44 +0000 (0:00:01.212) 0:00:07.974 ***********
2025-07-06 19:45:49.007494 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-06 19:45:49.007507 | orchestrator |
2025-07-06 19:45:49.007518 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2025-07-06 19:45:49.007529 | orchestrator | Sunday
06 July 2025 19:45:44 +0000 (0:00:00.253) 0:00:08.227 *********** 2025-07-06 19:45:49.007540 | orchestrator | changed: [testbed-manager] 2025-07-06 19:45:49.007579 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:45:49.007596 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:45:49.007614 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:45:49.007627 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:45:49.007638 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:45:49.007648 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:45:49.007659 | orchestrator | 2025-07-06 19:45:49.007670 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-07-06 19:45:49.007681 | orchestrator | Sunday 06 July 2025 19:45:46 +0000 (0:00:01.977) 0:00:10.205 *********** 2025-07-06 19:45:49.007692 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:45:49.007704 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 19:45:49.007717 | orchestrator | 2025-07-06 19:45:49.007728 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-07-06 19:45:49.007739 | orchestrator | Sunday 06 July 2025 19:45:46 +0000 (0:00:00.263) 0:00:10.469 *********** 2025-07-06 19:45:49.007760 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:45:49.007770 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:45:49.007781 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:45:49.007792 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:45:49.007809 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:45:49.007821 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:45:49.007831 | orchestrator | 2025-07-06 19:45:49.007842 | orchestrator | TASK [osism.commons.proxy : Set 
system wide settings in environment file] ****** 2025-07-06 19:45:49.007853 | orchestrator | Sunday 06 July 2025 19:45:47 +0000 (0:00:01.079) 0:00:11.549 *********** 2025-07-06 19:45:49.007864 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:45:49.007874 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:45:49.007885 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:45:49.007896 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:45:49.007906 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:45:49.007917 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:45:49.007928 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:45:49.007938 | orchestrator | 2025-07-06 19:45:49.007949 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-07-06 19:45:49.007960 | orchestrator | Sunday 06 July 2025 19:45:48 +0000 (0:00:00.574) 0:00:12.124 *********** 2025-07-06 19:45:49.007971 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:45:49.007981 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:45:49.007992 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:45:49.008002 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:45:49.008013 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:45:49.008024 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:45:49.008035 | orchestrator | ok: [testbed-manager] 2025-07-06 19:45:49.008045 | orchestrator | 2025-07-06 19:45:49.008056 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-07-06 19:45:49.008068 | orchestrator | Sunday 06 July 2025 19:45:48 +0000 (0:00:00.414) 0:00:12.538 *********** 2025-07-06 19:45:49.008086 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:45:49.008098 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:45:49.008120 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:46:00.955127 | orchestrator | skipping: 
[testbed-node-2] 2025-07-06 19:46:00.955247 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:46:00.955292 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:46:00.955304 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:46:00.955316 | orchestrator | 2025-07-06 19:46:00.955329 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-07-06 19:46:00.955342 | orchestrator | Sunday 06 July 2025 19:45:49 +0000 (0:00:00.201) 0:00:12.739 *********** 2025-07-06 19:46:00.955355 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 19:46:00.955384 | orchestrator | 2025-07-06 19:46:00.955396 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-07-06 19:46:00.955408 | orchestrator | Sunday 06 July 2025 19:45:49 +0000 (0:00:00.268) 0:00:13.008 *********** 2025-07-06 19:46:00.955419 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 19:46:00.955431 | orchestrator | 2025-07-06 19:46:00.955442 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-07-06 19:46:00.955452 | orchestrator | Sunday 06 July 2025 19:45:49 +0000 (0:00:00.299) 0:00:13.307 *********** 2025-07-06 19:46:00.955463 | orchestrator | ok: [testbed-manager] 2025-07-06 19:46:00.955476 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:46:00.955486 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:46:00.955520 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:46:00.955532 | orchestrator | ok: [testbed-node-3] 2025-07-06 
19:46:00.955542 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:46:00.955553 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:46:00.955618 | orchestrator | 2025-07-06 19:46:00.955632 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-07-06 19:46:00.955643 | orchestrator | Sunday 06 July 2025 19:45:50 +0000 (0:00:01.224) 0:00:14.531 *********** 2025-07-06 19:46:00.955654 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:46:00.955667 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:46:00.955681 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:46:00.955693 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:46:00.955706 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:46:00.955718 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:46:00.955731 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:46:00.955743 | orchestrator | 2025-07-06 19:46:00.955756 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-07-06 19:46:00.955768 | orchestrator | Sunday 06 July 2025 19:45:51 +0000 (0:00:00.208) 0:00:14.740 *********** 2025-07-06 19:46:00.955781 | orchestrator | ok: [testbed-manager] 2025-07-06 19:46:00.955793 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:46:00.955806 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:46:00.955819 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:46:00.955831 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:46:00.955844 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:46:00.955856 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:46:00.955868 | orchestrator | 2025-07-06 19:46:00.955881 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-07-06 19:46:00.955893 | orchestrator | Sunday 06 July 2025 19:45:51 +0000 (0:00:00.555) 0:00:15.295 *********** 2025-07-06 19:46:00.955906 | orchestrator | skipping: 
[testbed-manager] 2025-07-06 19:46:00.955919 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:46:00.955931 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:46:00.955944 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:46:00.955957 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:46:00.955969 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:46:00.955982 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:46:00.955995 | orchestrator | 2025-07-06 19:46:00.956008 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-07-06 19:46:00.956035 | orchestrator | Sunday 06 July 2025 19:45:51 +0000 (0:00:00.240) 0:00:15.536 *********** 2025-07-06 19:46:00.956047 | orchestrator | ok: [testbed-manager] 2025-07-06 19:46:00.956068 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:46:00.956079 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:46:00.956090 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:46:00.956101 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:46:00.956111 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:46:00.956122 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:46:00.956132 | orchestrator | 2025-07-06 19:46:00.956143 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-07-06 19:46:00.956155 | orchestrator | Sunday 06 July 2025 19:45:52 +0000 (0:00:00.545) 0:00:16.081 *********** 2025-07-06 19:46:00.956165 | orchestrator | ok: [testbed-manager] 2025-07-06 19:46:00.956176 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:46:00.956187 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:46:00.956197 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:46:00.956208 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:46:00.956219 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:46:00.956229 | orchestrator | changed: 
[testbed-node-5] 2025-07-06 19:46:00.956240 | orchestrator | 2025-07-06 19:46:00.956251 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-07-06 19:46:00.956262 | orchestrator | Sunday 06 July 2025 19:45:53 +0000 (0:00:01.208) 0:00:17.289 *********** 2025-07-06 19:46:00.956272 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:46:00.956291 | orchestrator | ok: [testbed-manager] 2025-07-06 19:46:00.956302 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:46:00.956313 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:46:00.956324 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:46:00.956335 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:46:00.956346 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:46:00.956356 | orchestrator | 2025-07-06 19:46:00.956367 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-07-06 19:46:00.956379 | orchestrator | Sunday 06 July 2025 19:45:54 +0000 (0:00:01.141) 0:00:18.431 *********** 2025-07-06 19:46:00.956406 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 19:46:00.956419 | orchestrator | 2025-07-06 19:46:00.956430 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-07-06 19:46:00.956441 | orchestrator | Sunday 06 July 2025 19:45:55 +0000 (0:00:00.389) 0:00:18.820 *********** 2025-07-06 19:46:00.956452 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:46:00.956463 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:46:00.956474 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:46:00.956485 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:46:00.956495 | orchestrator | changed: [testbed-node-3] 2025-07-06 
19:46:00.956506 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:46:00.956517 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:46:00.956528 | orchestrator | 2025-07-06 19:46:00.956539 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-07-06 19:46:00.956550 | orchestrator | Sunday 06 July 2025 19:45:56 +0000 (0:00:01.312) 0:00:20.133 *********** 2025-07-06 19:46:00.956583 | orchestrator | ok: [testbed-manager] 2025-07-06 19:46:00.956604 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:46:00.956624 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:46:00.956642 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:46:00.956661 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:46:00.956673 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:46:00.956683 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:46:00.956694 | orchestrator | 2025-07-06 19:46:00.956705 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-07-06 19:46:00.956715 | orchestrator | Sunday 06 July 2025 19:45:56 +0000 (0:00:00.219) 0:00:20.353 *********** 2025-07-06 19:46:00.956726 | orchestrator | ok: [testbed-manager] 2025-07-06 19:46:00.956737 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:46:00.956747 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:46:00.956758 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:46:00.956769 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:46:00.956779 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:46:00.956790 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:46:00.956801 | orchestrator | 2025-07-06 19:46:00.956811 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-07-06 19:46:00.956822 | orchestrator | Sunday 06 July 2025 19:45:56 +0000 (0:00:00.208) 0:00:20.561 *********** 2025-07-06 19:46:00.956833 | orchestrator | ok: [testbed-manager] 2025-07-06 19:46:00.956844 | 
orchestrator | ok: [testbed-node-0] 2025-07-06 19:46:00.956855 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:46:00.956911 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:46:00.956923 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:46:00.956934 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:46:00.956945 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:46:00.956955 | orchestrator | 2025-07-06 19:46:00.956966 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-07-06 19:46:00.956977 | orchestrator | Sunday 06 July 2025 19:45:57 +0000 (0:00:00.251) 0:00:20.812 *********** 2025-07-06 19:46:00.956989 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 19:46:00.957010 | orchestrator | 2025-07-06 19:46:00.957021 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-07-06 19:46:00.957032 | orchestrator | Sunday 06 July 2025 19:45:57 +0000 (0:00:00.282) 0:00:21.094 *********** 2025-07-06 19:46:00.957043 | orchestrator | ok: [testbed-manager] 2025-07-06 19:46:00.957054 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:46:00.957064 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:46:00.957075 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:46:00.957086 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:46:00.957096 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:46:00.957107 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:46:00.957118 | orchestrator | 2025-07-06 19:46:00.957129 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-07-06 19:46:00.957140 | orchestrator | Sunday 06 July 2025 19:45:57 +0000 (0:00:00.571) 0:00:21.666 *********** 2025-07-06 19:46:00.957151 | orchestrator | 
skipping: [testbed-manager] 2025-07-06 19:46:00.957161 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:46:00.957172 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:46:00.957183 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:46:00.957198 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:46:00.957209 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:46:00.957220 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:46:00.957231 | orchestrator | 2025-07-06 19:46:00.957241 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-07-06 19:46:00.957252 | orchestrator | Sunday 06 July 2025 19:45:58 +0000 (0:00:00.202) 0:00:21.869 *********** 2025-07-06 19:46:00.957263 | orchestrator | ok: [testbed-manager] 2025-07-06 19:46:00.957274 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:46:00.957284 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:46:00.957295 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:46:00.957306 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:46:00.957316 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:46:00.957327 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:46:00.957338 | orchestrator | 2025-07-06 19:46:00.957349 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-07-06 19:46:00.957360 | orchestrator | Sunday 06 July 2025 19:45:59 +0000 (0:00:01.047) 0:00:22.916 *********** 2025-07-06 19:46:00.957370 | orchestrator | ok: [testbed-manager] 2025-07-06 19:46:00.957381 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:46:00.957392 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:46:00.957402 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:46:00.957413 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:46:00.957423 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:46:00.957434 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:46:00.957445 | orchestrator | 
2025-07-06 19:46:00.957455 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-07-06 19:46:00.957466 | orchestrator | Sunday 06 July 2025 19:45:59 +0000 (0:00:00.563) 0:00:23.479 *********** 2025-07-06 19:46:00.957477 | orchestrator | ok: [testbed-manager] 2025-07-06 19:46:00.957488 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:46:00.957499 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:46:00.957510 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:46:00.957528 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:46:37.771287 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:46:37.771391 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:46:37.771407 | orchestrator | 2025-07-06 19:46:37.771420 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-07-06 19:46:37.771434 | orchestrator | Sunday 06 July 2025 19:46:00 +0000 (0:00:01.147) 0:00:24.627 *********** 2025-07-06 19:46:37.771445 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:46:37.771456 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:46:37.771467 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:46:37.771478 | orchestrator | changed: [testbed-manager] 2025-07-06 19:46:37.771489 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:46:37.771518 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:46:37.771529 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:46:37.771540 | orchestrator | 2025-07-06 19:46:37.771551 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-07-06 19:46:37.771562 | orchestrator | Sunday 06 July 2025 19:46:14 +0000 (0:00:13.927) 0:00:38.555 *********** 2025-07-06 19:46:37.771573 | orchestrator | ok: [testbed-manager] 2025-07-06 19:46:37.771584 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:46:37.771627 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:46:37.771646 | orchestrator 
| ok: [testbed-node-2] 2025-07-06 19:46:37.771665 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:46:37.771684 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:46:37.771703 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:46:37.771714 | orchestrator | 2025-07-06 19:46:37.771725 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-07-06 19:46:37.771736 | orchestrator | Sunday 06 July 2025 19:46:15 +0000 (0:00:00.216) 0:00:38.771 *********** 2025-07-06 19:46:37.771747 | orchestrator | ok: [testbed-manager] 2025-07-06 19:46:37.771758 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:46:37.771769 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:46:37.771779 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:46:37.771790 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:46:37.771800 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:46:37.771812 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:46:37.771825 | orchestrator | 2025-07-06 19:46:37.771837 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-07-06 19:46:37.771850 | orchestrator | Sunday 06 July 2025 19:46:15 +0000 (0:00:00.234) 0:00:39.006 *********** 2025-07-06 19:46:37.771862 | orchestrator | ok: [testbed-manager] 2025-07-06 19:46:37.771874 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:46:37.771885 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:46:37.771897 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:46:37.771909 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:46:37.771922 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:46:37.771932 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:46:37.771943 | orchestrator | 2025-07-06 19:46:37.771954 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-07-06 19:46:37.771965 | orchestrator | Sunday 06 July 2025 19:46:15 +0000 (0:00:00.227) 0:00:39.233 *********** 2025-07-06 
19:46:37.771978 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 19:46:37.771992 | orchestrator | 2025-07-06 19:46:37.772003 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-07-06 19:46:37.772014 | orchestrator | Sunday 06 July 2025 19:46:15 +0000 (0:00:00.308) 0:00:39.542 *********** 2025-07-06 19:46:37.772025 | orchestrator | ok: [testbed-manager] 2025-07-06 19:46:37.772036 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:46:37.772046 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:46:37.772057 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:46:37.772068 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:46:37.772078 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:46:37.772089 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:46:37.772100 | orchestrator | 2025-07-06 19:46:37.772110 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-07-06 19:46:37.772121 | orchestrator | Sunday 06 July 2025 19:46:17 +0000 (0:00:01.853) 0:00:41.396 *********** 2025-07-06 19:46:37.772132 | orchestrator | changed: [testbed-manager] 2025-07-06 19:46:37.772143 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:46:37.772153 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:46:37.772164 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:46:37.772175 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:46:37.772186 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:46:37.772196 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:46:37.772215 | orchestrator | 2025-07-06 19:46:37.772236 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-07-06 19:46:37.772248 | 
orchestrator | Sunday 06 July 2025 19:46:18 +0000 (0:00:01.111) 0:00:42.507 *********** 2025-07-06 19:46:37.772258 | orchestrator | ok: [testbed-manager] 2025-07-06 19:46:37.772269 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:46:37.772280 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:46:37.772290 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:46:37.772301 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:46:37.772312 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:46:37.772322 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:46:37.772333 | orchestrator | 2025-07-06 19:46:37.772344 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-07-06 19:46:37.772355 | orchestrator | Sunday 06 July 2025 19:46:19 +0000 (0:00:00.869) 0:00:43.377 *********** 2025-07-06 19:46:37.772367 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 19:46:37.772379 | orchestrator | 2025-07-06 19:46:37.772390 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-07-06 19:46:37.772401 | orchestrator | Sunday 06 July 2025 19:46:19 +0000 (0:00:00.293) 0:00:43.671 *********** 2025-07-06 19:46:37.772412 | orchestrator | changed: [testbed-manager] 2025-07-06 19:46:37.772423 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:46:37.772434 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:46:37.772444 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:46:37.772456 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:46:37.772466 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:46:37.772477 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:46:37.772488 | orchestrator | 2025-07-06 19:46:37.772516 | orchestrator | TASK [osism.services.rsyslog : 
Include additional log server tasks] ************ 2025-07-06 19:46:37.772528 | orchestrator | Sunday 06 July 2025 19:46:21 +0000 (0:00:01.047) 0:00:44.718 *********** 2025-07-06 19:46:37.772538 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:46:37.772549 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:46:37.772563 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:46:37.772581 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:46:37.772621 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:46:37.772640 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:46:37.772660 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:46:37.772678 | orchestrator | 2025-07-06 19:46:37.772694 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-07-06 19:46:37.772705 | orchestrator | Sunday 06 July 2025 19:46:21 +0000 (0:00:00.296) 0:00:45.015 *********** 2025-07-06 19:46:37.772716 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:46:37.772727 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:46:37.772737 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:46:37.772748 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:46:37.772758 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:46:37.772769 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:46:37.772779 | orchestrator | changed: [testbed-manager] 2025-07-06 19:46:37.772790 | orchestrator | 2025-07-06 19:46:37.772800 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-07-06 19:46:37.772811 | orchestrator | Sunday 06 July 2025 19:46:32 +0000 (0:00:11.235) 0:00:56.251 *********** 2025-07-06 19:46:37.772828 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:46:37.772845 | orchestrator | ok: [testbed-manager] 2025-07-06 19:46:37.772862 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:46:37.772879 | orchestrator | ok: [testbed-node-4] 2025-07-06 
19:46:37.772896 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:46:37.772913 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:46:37.772932 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:46:37.772951 | orchestrator | 2025-07-06 19:46:37.772966 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-07-06 19:46:37.772987 | orchestrator | Sunday 06 July 2025 19:46:33 +0000 (0:00:01.110) 0:00:57.361 *********** 2025-07-06 19:46:37.773006 | orchestrator | ok: [testbed-manager] 2025-07-06 19:46:37.773023 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:46:37.773040 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:46:37.773057 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:46:37.773074 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:46:37.773093 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:46:37.773111 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:46:37.773126 | orchestrator | 2025-07-06 19:46:37.773137 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-07-06 19:46:37.773148 | orchestrator | Sunday 06 July 2025 19:46:34 +0000 (0:00:00.897) 0:00:58.259 *********** 2025-07-06 19:46:37.773159 | orchestrator | ok: [testbed-manager] 2025-07-06 19:46:37.773169 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:46:37.773180 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:46:37.773190 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:46:37.773201 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:46:37.773211 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:46:37.773222 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:46:37.773232 | orchestrator | 2025-07-06 19:46:37.773243 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-07-06 19:46:37.773254 | orchestrator | Sunday 06 July 2025 19:46:34 +0000 (0:00:00.205) 0:00:58.465 *********** 2025-07-06 19:46:37.773265 | 
orchestrator | ok: [testbed-manager] 2025-07-06 19:46:37.773275 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:46:37.773285 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:46:37.773296 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:46:37.773306 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:46:37.773317 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:46:37.773327 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:46:37.773338 | orchestrator | 2025-07-06 19:46:37.773349 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-07-06 19:46:37.773359 | orchestrator | Sunday 06 July 2025 19:46:35 +0000 (0:00:00.228) 0:00:58.693 *********** 2025-07-06 19:46:37.773370 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 19:46:37.773382 | orchestrator | 2025-07-06 19:46:37.773392 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-07-06 19:46:37.773403 | orchestrator | Sunday 06 July 2025 19:46:35 +0000 (0:00:00.303) 0:00:58.997 *********** 2025-07-06 19:46:37.773414 | orchestrator | ok: [testbed-manager] 2025-07-06 19:46:37.773425 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:46:37.773435 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:46:37.773446 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:46:37.773456 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:46:37.773467 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:46:37.773477 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:46:37.773488 | orchestrator | 2025-07-06 19:46:37.773499 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-07-06 19:46:37.773509 | orchestrator | Sunday 06 July 2025 19:46:36 +0000 (0:00:01.618) 
0:01:00.615 *********** 2025-07-06 19:46:37.773520 | orchestrator | changed: [testbed-manager] 2025-07-06 19:46:37.773530 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:46:37.773541 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:46:37.773552 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:46:37.773562 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:46:37.773573 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:46:37.773583 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:46:37.773619 | orchestrator | 2025-07-06 19:46:37.773631 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-07-06 19:46:37.773642 | orchestrator | Sunday 06 July 2025 19:46:37 +0000 (0:00:00.589) 0:01:01.204 *********** 2025-07-06 19:46:37.773661 | orchestrator | ok: [testbed-manager] 2025-07-06 19:46:37.773672 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:46:37.773682 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:46:37.773693 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:46:37.773703 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:46:37.773714 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:46:37.773724 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:46:37.773735 | orchestrator | 2025-07-06 19:46:37.773756 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-07-06 19:48:56.479084 | orchestrator | Sunday 06 July 2025 19:46:37 +0000 (0:00:00.244) 0:01:01.449 *********** 2025-07-06 19:48:56.479200 | orchestrator | ok: [testbed-manager] 2025-07-06 19:48:56.479216 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:48:56.479228 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:48:56.479239 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:48:56.479251 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:48:56.479262 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:48:56.479272 | orchestrator | ok: 
[testbed-node-5] 2025-07-06 19:48:56.479283 | orchestrator | 2025-07-06 19:48:56.479295 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-07-06 19:48:56.479307 | orchestrator | Sunday 06 July 2025 19:46:38 +0000 (0:00:01.098) 0:01:02.548 *********** 2025-07-06 19:48:56.479318 | orchestrator | changed: [testbed-manager] 2025-07-06 19:48:56.479330 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:48:56.479340 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:48:56.479351 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:48:56.479362 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:48:56.479373 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:48:56.479383 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:48:56.479394 | orchestrator | 2025-07-06 19:48:56.479406 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-07-06 19:48:56.479416 | orchestrator | Sunday 06 July 2025 19:46:40 +0000 (0:00:01.576) 0:01:04.124 *********** 2025-07-06 19:48:56.479427 | orchestrator | ok: [testbed-manager] 2025-07-06 19:48:56.479439 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:48:56.479449 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:48:56.479460 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:48:56.479471 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:48:56.479502 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:48:56.479513 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:48:56.479524 | orchestrator | 2025-07-06 19:48:56.479535 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-07-06 19:48:56.479546 | orchestrator | Sunday 06 July 2025 19:46:42 +0000 (0:00:02.166) 0:01:06.291 *********** 2025-07-06 19:48:56.479557 | orchestrator | ok: [testbed-manager] 2025-07-06 19:48:56.479568 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:48:56.479578 | orchestrator | 
ok: [testbed-node-5] 2025-07-06 19:48:56.479589 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:48:56.479600 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:48:56.479610 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:48:56.479621 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:48:56.479631 | orchestrator | 2025-07-06 19:48:56.479642 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-07-06 19:48:56.479653 | orchestrator | Sunday 06 July 2025 19:47:20 +0000 (0:00:38.112) 0:01:44.404 *********** 2025-07-06 19:48:56.479664 | orchestrator | changed: [testbed-manager] 2025-07-06 19:48:56.479675 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:48:56.479686 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:48:56.479696 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:48:56.479707 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:48:56.479718 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:48:56.479729 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:48:56.479774 | orchestrator | 2025-07-06 19:48:56.479792 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-07-06 19:48:56.479828 | orchestrator | Sunday 06 July 2025 19:48:37 +0000 (0:01:17.034) 0:03:01.439 *********** 2025-07-06 19:48:56.479847 | orchestrator | ok: [testbed-manager] 2025-07-06 19:48:56.479865 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:48:56.479882 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:48:56.479899 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:48:56.479917 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:48:56.479934 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:48:56.479952 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:48:56.479970 | orchestrator | 2025-07-06 19:48:56.479988 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-07-06 19:48:56.480007 
| orchestrator | Sunday 06 July 2025 19:48:39 +0000 (0:00:01.733) 0:03:03.172 *********** 2025-07-06 19:48:56.480025 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:48:56.480042 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:48:56.480059 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:48:56.480077 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:48:56.480095 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:48:56.480113 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:48:56.480130 | orchestrator | changed: [testbed-manager] 2025-07-06 19:48:56.480147 | orchestrator | 2025-07-06 19:48:56.480165 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-07-06 19:48:56.480184 | orchestrator | Sunday 06 July 2025 19:48:51 +0000 (0:00:11.531) 0:03:14.703 *********** 2025-07-06 19:48:56.480221 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-07-06 19:48:56.480246 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 
'value': 8192}]}) 2025-07-06 19:48:56.480301 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-07-06 19:48:56.480329 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-07-06 19:48:56.480421 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-07-06 19:48:56.480445 | orchestrator | 2025-07-06 19:48:56.480464 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-07-06 19:48:56.480481 | orchestrator | Sunday 06 July 2025 19:48:51 +0000 (0:00:00.355) 0:03:15.059 *********** 2025-07-06 19:48:56.480500 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-07-06 19:48:56.480518 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:48:56.480554 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-07-06 19:48:56.480572 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-07-06 19:48:56.480592 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:48:56.480609 | orchestrator | skipping: [testbed-node-4] 2025-07-06 
19:48:56.480628 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-07-06 19:48:56.480645 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:48:56.480663 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-07-06 19:48:56.480681 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-07-06 19:48:56.480699 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-07-06 19:48:56.480717 | orchestrator | 2025-07-06 19:48:56.480781 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-07-06 19:48:56.480802 | orchestrator | Sunday 06 July 2025 19:48:51 +0000 (0:00:00.600) 0:03:15.659 *********** 2025-07-06 19:48:56.480821 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-07-06 19:48:56.480841 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-07-06 19:48:56.480860 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-07-06 19:48:56.480878 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-07-06 19:48:56.480897 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-07-06 19:48:56.480915 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-07-06 19:48:56.480934 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-07-06 19:48:56.480951 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-07-06 19:48:56.480980 | orchestrator | skipping: [testbed-manager] => 
(item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-07-06 19:48:56.481000 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-07-06 19:48:56.481019 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:48:56.481037 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-07-06 19:48:56.481055 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-07-06 19:48:56.481074 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-07-06 19:48:56.481092 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-07-06 19:48:56.481112 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-07-06 19:48:56.481131 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-07-06 19:48:56.481149 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-07-06 19:48:56.481167 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-07-06 19:48:56.481186 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-07-06 19:48:56.481206 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-07-06 19:48:56.481241 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-07-06 19:48:58.540130 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-07-06 19:48:58.541271 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  
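The rabbitmq sysctl profile being applied (or skipped, per host group) above boils down to a fixed key/value set. As a minimal sketch of what such a role typically renders, it could be written as a sysctl.d drop-in; the path and filename here are illustrative assumptions, not taken from the osism.commons.sysctl role, and the sketch targets /tmp so it runs without root (a real deployment would write under /etc/sysctl.d/ and run `sysctl --system`):

```shell
# Sketch (assumed layout): render the rabbitmq sysctl values from the log
# above as a sysctl.d-style drop-in file. Filename is hypothetical.
conf=/tmp/90-rabbitmq-demo.conf
cat > "$conf" <<'EOF'
net.ipv4.tcp_keepalive_time = 6
net.ipv4.tcp_keepalive_intvl = 3
net.ipv4.tcp_keepalive_probes = 3
net.core.wmem_max = 16777216
net.core.rmem_max = 16777216
net.ipv4.tcp_fin_timeout = 20
net.ipv4.tcp_tw_reuse = 1
net.core.somaxconn = 4096
net.ipv4.tcp_syncookies = 0
net.ipv4.tcp_max_syn_backlog = 8192
EOF
# Count the rendered settings (one per line).
wc -l < "$conf"
```

Applying the file on a live host would then be `sysctl -p /tmp/90-rabbitmq-demo.conf` (as root), which matches the per-item `changed:` results reported for testbed-node-0/1/2 in the log.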
2025-07-06 19:48:58.541318 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-07-06 19:48:58.541337 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:48:58.541355 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-07-06 19:48:58.541372 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-07-06 19:48:58.541389 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-07-06 19:48:58.541406 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-07-06 19:48:58.541422 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-07-06 19:48:58.541439 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-07-06 19:48:58.541456 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-07-06 19:48:58.541473 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-07-06 19:48:58.541489 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:48:58.541506 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-07-06 19:48:58.541525 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-07-06 19:48:58.541544 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-07-06 19:48:58.541562 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-07-06 19:48:58.541581 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-07-06 
19:48:58.541598 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-07-06 19:48:58.541616 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-07-06 19:48:58.541632 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-07-06 19:48:58.541649 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:48:58.541665 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-07-06 19:48:58.541681 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-07-06 19:48:58.541698 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-07-06 19:48:58.541714 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-07-06 19:48:58.541731 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-07-06 19:48:58.541777 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-07-06 19:48:58.541795 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-07-06 19:48:58.541812 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-07-06 19:48:58.541829 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-07-06 19:48:58.541845 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-07-06 19:48:58.541862 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-07-06 19:48:58.541878 | orchestrator | changed: [testbed-node-1] => (item={'name': 
'net.core.rmem_max', 'value': 16777216}) 2025-07-06 19:48:58.541907 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-07-06 19:48:58.541924 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-07-06 19:48:58.541941 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-07-06 19:48:58.541958 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-07-06 19:48:58.541974 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-07-06 19:48:58.541990 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-07-06 19:48:58.542008 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-07-06 19:48:58.542089 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-07-06 19:48:58.542106 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-07-06 19:48:58.542153 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-07-06 19:48:58.542171 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-07-06 19:48:58.542189 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-07-06 19:48:58.542206 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-07-06 19:48:58.542223 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-07-06 19:48:58.542240 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-07-06 
19:48:58.542257 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-07-06 19:48:58.542275 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-07-06 19:48:58.542291 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-07-06 19:48:58.542370 | orchestrator | 2025-07-06 19:48:58.542388 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-07-06 19:48:58.542405 | orchestrator | Sunday 06 July 2025 19:48:56 +0000 (0:00:04.498) 0:03:20.157 *********** 2025-07-06 19:48:58.542421 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-07-06 19:48:58.542438 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-07-06 19:48:58.542455 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-07-06 19:48:58.542471 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-07-06 19:48:58.542487 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-07-06 19:48:58.542504 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-07-06 19:48:58.542523 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-07-06 19:48:58.542540 | orchestrator | 2025-07-06 19:48:58.542556 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-07-06 19:48:58.542616 | orchestrator | Sunday 06 July 2025 19:48:57 +0000 (0:00:00.648) 0:03:20.806 *********** 2025-07-06 19:48:58.542636 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-07-06 19:48:58.542676 | orchestrator | skipping: [testbed-node-0] => 
(item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-07-06 19:48:58.542696 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:48:58.542714 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:48:58.542733 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-07-06 19:48:58.542806 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-07-06 19:48:58.542824 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:48:58.542841 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:48:58.542859 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-07-06 19:48:58.542878 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-07-06 19:48:58.542896 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-07-06 19:48:58.542913 | orchestrator | 2025-07-06 19:48:58.542932 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-07-06 19:48:58.542950 | orchestrator | Sunday 06 July 2025 19:48:57 +0000 (0:00:00.553) 0:03:21.360 *********** 2025-07-06 19:48:58.542977 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-07-06 19:48:58.542996 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:48:58.543014 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-07-06 19:48:58.543033 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:48:58.543051 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-07-06 19:48:58.543068 | orchestrator | skipping: [testbed-node-1] 2025-07-06 
19:48:58.543087 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-07-06 19:48:58.543106 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:48:58.543122 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-07-06 19:48:58.543139 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-07-06 19:48:58.543156 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-07-06 19:48:58.543172 | orchestrator | 2025-07-06 19:48:58.543188 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-07-06 19:48:58.543205 | orchestrator | Sunday 06 July 2025 19:48:58 +0000 (0:00:00.607) 0:03:21.967 *********** 2025-07-06 19:48:58.543222 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:48:58.543239 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:48:58.543257 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:48:58.543275 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:48:58.543292 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:48:58.543328 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:49:10.264952 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:49:10.265049 | orchestrator | 2025-07-06 19:49:10.265061 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-07-06 19:49:10.265072 | orchestrator | Sunday 06 July 2025 19:48:58 +0000 (0:00:00.252) 0:03:22.220 *********** 2025-07-06 19:49:10.265080 | orchestrator | ok: [testbed-manager] 2025-07-06 19:49:10.265089 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:49:10.265097 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:49:10.265105 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:49:10.265113 | orchestrator | ok: 
[testbed-node-1] 2025-07-06 19:49:10.265120 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:49:10.265128 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:49:10.265136 | orchestrator | 2025-07-06 19:49:10.265145 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-07-06 19:49:10.265154 | orchestrator | Sunday 06 July 2025 19:49:04 +0000 (0:00:05.698) 0:03:27.919 *********** 2025-07-06 19:49:10.265162 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-07-06 19:49:10.265170 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-07-06 19:49:10.265178 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:49:10.265186 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-07-06 19:49:10.265212 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:49:10.265220 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-07-06 19:49:10.265228 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:49:10.265236 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-07-06 19:49:10.265243 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:49:10.265251 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:49:10.265259 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-07-06 19:49:10.265266 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:49:10.265279 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-07-06 19:49:10.265292 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:49:10.265306 | orchestrator | 2025-07-06 19:49:10.265320 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-07-06 19:49:10.265333 | orchestrator | Sunday 06 July 2025 19:49:04 +0000 (0:00:00.302) 0:03:28.222 *********** 2025-07-06 19:49:10.265345 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-07-06 19:49:10.265358 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-07-06 
19:49:10.265366 | orchestrator | ok: [testbed-node-1] => (item=cron)
2025-07-06 19:49:10.265374 | orchestrator | ok: [testbed-node-4] => (item=cron)
2025-07-06 19:49:10.265382 | orchestrator | ok: [testbed-node-2] => (item=cron)
2025-07-06 19:49:10.265389 | orchestrator | ok: [testbed-node-5] => (item=cron)
2025-07-06 19:49:10.265397 | orchestrator | ok: [testbed-node-3] => (item=cron)
2025-07-06 19:49:10.265405 | orchestrator |
2025-07-06 19:49:10.265413 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2025-07-06 19:49:10.265421 | orchestrator | Sunday 06 July 2025 19:49:05 +0000 (0:00:01.064) 0:03:29.286 ***********
2025-07-06 19:49:10.265431 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-06 19:49:10.265441 | orchestrator |
2025-07-06 19:49:10.265450 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2025-07-06 19:49:10.265463 | orchestrator | Sunday 06 July 2025 19:49:05 +0000 (0:00:00.396) 0:03:29.682 ***********
2025-07-06 19:49:10.265476 | orchestrator | ok: [testbed-manager]
2025-07-06 19:49:10.265489 | orchestrator | ok: [testbed-node-0]
2025-07-06 19:49:10.265502 | orchestrator | ok: [testbed-node-1]
2025-07-06 19:49:10.265516 | orchestrator | ok: [testbed-node-4]
2025-07-06 19:49:10.265531 | orchestrator | ok: [testbed-node-2]
2025-07-06 19:49:10.265545 | orchestrator | ok: [testbed-node-3]
2025-07-06 19:49:10.265555 | orchestrator | ok: [testbed-node-5]
2025-07-06 19:49:10.265565 | orchestrator |
2025-07-06 19:49:10.265574 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2025-07-06 19:49:10.265583 | orchestrator | Sunday 06 July 2025 19:49:07 +0000 (0:00:01.347) 0:03:31.030 ***********
2025-07-06 19:49:10.265592 | orchestrator | ok: [testbed-manager]
2025-07-06 19:49:10.265601 | orchestrator | ok: [testbed-node-0]
2025-07-06 19:49:10.265609 | orchestrator | ok: [testbed-node-2]
2025-07-06 19:49:10.265618 | orchestrator | ok: [testbed-node-1]
2025-07-06 19:49:10.265642 | orchestrator | ok: [testbed-node-3]
2025-07-06 19:49:10.265651 | orchestrator | ok: [testbed-node-4]
2025-07-06 19:49:10.265660 | orchestrator | ok: [testbed-node-5]
2025-07-06 19:49:10.265669 | orchestrator |
2025-07-06 19:49:10.265678 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2025-07-06 19:49:10.265687 | orchestrator | Sunday 06 July 2025 19:49:07 +0000 (0:00:00.602) 0:03:31.632 ***********
2025-07-06 19:49:10.265696 | orchestrator | changed: [testbed-manager]
2025-07-06 19:49:10.265705 | orchestrator | changed: [testbed-node-0]
2025-07-06 19:49:10.265714 | orchestrator | changed: [testbed-node-1]
2025-07-06 19:49:10.265723 | orchestrator | changed: [testbed-node-2]
2025-07-06 19:49:10.265731 | orchestrator | changed: [testbed-node-4]
2025-07-06 19:49:10.265740 | orchestrator | changed: [testbed-node-3]
2025-07-06 19:49:10.265770 | orchestrator | changed: [testbed-node-5]
2025-07-06 19:49:10.265788 | orchestrator |
2025-07-06 19:49:10.265797 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2025-07-06 19:49:10.265806 | orchestrator | Sunday 06 July 2025 19:49:08 +0000 (0:00:00.676) 0:03:32.308 ***********
2025-07-06 19:49:10.265815 | orchestrator | ok: [testbed-manager]
2025-07-06 19:49:10.265824 | orchestrator | ok: [testbed-node-1]
2025-07-06 19:49:10.265833 | orchestrator | ok: [testbed-node-3]
2025-07-06 19:49:10.265842 | orchestrator | ok: [testbed-node-2]
2025-07-06 19:49:10.265851 | orchestrator | ok: [testbed-node-0]
2025-07-06 19:49:10.265858 | orchestrator | ok: [testbed-node-4]
2025-07-06 19:49:10.265866 | orchestrator | ok: [testbed-node-5]
2025-07-06 19:49:10.265874 | orchestrator |
2025-07-06 19:49:10.265882 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2025-07-06 19:49:10.265890 | orchestrator | Sunday 06 July 2025 19:49:09 +0000 (0:00:00.614) 0:03:32.923 ***********
2025-07-06 19:49:10.265918 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1751829867.657191, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-06 19:49:10.265930 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1751829932.3021576, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-06 19:49:10.265939 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1751829936.9754572, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-06 19:49:10.265947 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1751829926.337194, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-06 19:49:10.265955 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1751829929.1980212, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-06 19:49:10.265968 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1751829936.210939, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-06 19:49:10.265981 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1751829936.5809014, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-06 19:49:10.266003 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1751829890.8317628, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-06 19:49:35.395008 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1751829833.8591719, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-06 19:49:35.395129 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1751829818.4220805, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-06 19:49:35.395145 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1751829821.052291, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-06 19:49:35.395158 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1751829825.0193896, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-06 19:49:35.395170 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1751829827.1730964, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-06 19:49:35.395207 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1751829828.4170372, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-06 19:49:35.395219 | orchestrator |
2025-07-06 19:49:35.395232 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2025-07-06 19:49:35.395245 | orchestrator | Sunday 06 July 2025 19:49:10 +0000 (0:00:01.015) 0:03:33.938 ***********
2025-07-06 19:49:35.395256 | orchestrator | changed: [testbed-manager]
2025-07-06 19:49:35.395268 | orchestrator | changed: [testbed-node-0]
2025-07-06 19:49:35.395279 | orchestrator | changed: [testbed-node-1]
2025-07-06 19:49:35.395289 | orchestrator | changed: [testbed-node-2]
2025-07-06 19:49:35.395300 | orchestrator | changed: [testbed-node-3]
2025-07-06 19:49:35.395311 | orchestrator | changed: [testbed-node-5]
2025-07-06 19:49:35.395321 | orchestrator | changed: [testbed-node-4]
2025-07-06 19:49:35.395332 | orchestrator |
2025-07-06 19:49:35.395343 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2025-07-06 19:49:35.395354 | orchestrator | Sunday 06 July 2025 19:49:11 +0000 (0:00:01.188) 0:03:35.069 ***********
2025-07-06 19:49:35.395365 | orchestrator | changed: [testbed-manager]
2025-07-06 19:49:35.395376 | orchestrator | changed: [testbed-node-0]
2025-07-06 19:49:35.395387 | orchestrator | changed: [testbed-node-3]
2025-07-06 19:49:35.395397 | orchestrator | changed: [testbed-node-4]
2025-07-06 19:49:35.395426 | orchestrator | changed: [testbed-node-1]
2025-07-06 19:49:35.395438 | orchestrator | changed: [testbed-node-2]
2025-07-06 19:49:35.395448 | orchestrator | changed: [testbed-node-5]
2025-07-06 19:49:35.395459 | orchestrator |
2025-07-06 19:49:35.395470 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2025-07-06 19:49:35.395499 | orchestrator | Sunday 06 July 2025 19:49:12 +0000 (0:00:01.109) 0:03:36.258 ***********
2025-07-06 19:49:35.395510 | orchestrator | changed: [testbed-manager]
2025-07-06 19:49:35.395521 | orchestrator | changed: [testbed-node-0]
2025-07-06 19:49:35.395534 | orchestrator | changed: [testbed-node-1]
2025-07-06 19:49:35.395547 | orchestrator | changed: [testbed-node-2]
2025-07-06 19:49:35.395559 | orchestrator | changed: [testbed-node-3]
2025-07-06 19:49:35.395571 | orchestrator | changed: [testbed-node-4]
2025-07-06 19:49:35.395583 | orchestrator | changed: [testbed-node-5]
2025-07-06 19:49:35.395596 | orchestrator |
2025-07-06 19:49:35.395608 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2025-07-06 19:49:35.395621 | orchestrator | Sunday 06 July 2025 19:49:13 +0000 (0:00:01.109) 0:03:37.367 ***********
2025-07-06 19:49:35.395632 | orchestrator | skipping: [testbed-manager]
2025-07-06 19:49:35.395644 | orchestrator | skipping: [testbed-node-0]
2025-07-06 19:49:35.395657 | orchestrator | skipping: [testbed-node-1]
2025-07-06 19:49:35.395669 | orchestrator | skipping: [testbed-node-2]
2025-07-06 19:49:35.395680 | orchestrator | skipping: [testbed-node-3]
2025-07-06 19:49:35.395693 | orchestrator | skipping: [testbed-node-4]
2025-07-06 19:49:35.395705 | orchestrator | skipping: [testbed-node-5]
2025-07-06 19:49:35.395718 | orchestrator |
2025-07-06 19:49:35.395730 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2025-07-06 19:49:35.395742 | orchestrator | Sunday 06 July 2025 19:49:13 +0000 (0:00:00.252) 0:03:37.619 ***********
2025-07-06 19:49:35.395762 | orchestrator | ok: [testbed-manager]
2025-07-06 19:49:35.395796 | orchestrator | ok: [testbed-node-0]
2025-07-06 19:49:35.395810 | orchestrator | ok: [testbed-node-1]
2025-07-06 19:49:35.395823 | orchestrator | ok: [testbed-node-2]
2025-07-06 19:49:35.395835 | orchestrator | ok: [testbed-node-3]
2025-07-06 19:49:35.395847 | orchestrator | ok: [testbed-node-4]
2025-07-06 19:49:35.395859 | orchestrator | ok: [testbed-node-5]
2025-07-06 19:49:35.395872 | orchestrator |
2025-07-06 19:49:35.395885 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2025-07-06 19:49:35.395897 | orchestrator | Sunday 06 July 2025 19:49:14 +0000 (0:00:00.754) 0:03:38.374 ***********
2025-07-06 19:49:35.395910 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-06 19:49:35.395923 | orchestrator |
2025-07-06 19:49:35.395934 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2025-07-06 19:49:35.395945 | orchestrator | Sunday 06 July 2025 19:49:15 +0000 (0:00:00.383) 0:03:38.758 ***********
2025-07-06 19:49:35.395956 | orchestrator | ok: [testbed-manager]
2025-07-06 19:49:35.395967 | orchestrator | changed: [testbed-node-0]
2025-07-06 19:49:35.395977 | orchestrator | changed: [testbed-node-5]
2025-07-06 19:49:35.395988 | orchestrator | changed: [testbed-node-4]
2025-07-06 19:49:35.395999 | orchestrator | changed: [testbed-node-2]
2025-07-06 19:49:35.396009 | orchestrator | changed: [testbed-node-1]
2025-07-06 19:49:35.396019 | orchestrator | changed: [testbed-node-3]
2025-07-06 19:49:35.396030 | orchestrator |
2025-07-06 19:49:35.396041 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2025-07-06 19:49:35.396051 | orchestrator | Sunday 06 July 2025 19:49:23 +0000 (0:00:08.273) 0:03:47.031 ***********
2025-07-06 19:49:35.396062 | orchestrator | ok: [testbed-manager]
2025-07-06 19:49:35.396072 | orchestrator | ok: [testbed-node-0]
2025-07-06 19:49:35.396083 | orchestrator | ok: [testbed-node-1]
2025-07-06 19:49:35.396094 | orchestrator | ok: [testbed-node-2]
2025-07-06 19:49:35.396104 | orchestrator | ok: [testbed-node-4]
2025-07-06 19:49:35.396114 | orchestrator | ok: [testbed-node-3]
2025-07-06 19:49:35.396125 | orchestrator | ok: [testbed-node-5]
2025-07-06 19:49:35.396135 | orchestrator |
2025-07-06 19:49:35.396146 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2025-07-06 19:49:35.396162 | orchestrator | Sunday 06 July 2025 19:49:24 +0000 (0:00:01.205) 0:03:48.237 ***********
2025-07-06 19:49:35.396173 | orchestrator | ok: [testbed-manager]
2025-07-06 19:49:35.396184 | orchestrator | ok: [testbed-node-0]
2025-07-06 19:49:35.396194 | orchestrator | ok: [testbed-node-1]
2025-07-06 19:49:35.396204 | orchestrator | ok: [testbed-node-2]
2025-07-06 19:49:35.396215 | orchestrator | ok: [testbed-node-3]
2025-07-06 19:49:35.396226 | orchestrator | ok: [testbed-node-4]
2025-07-06 19:49:35.396236 | orchestrator | ok: [testbed-node-5]
2025-07-06 19:49:35.396247 | orchestrator |
2025-07-06 19:49:35.396258 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2025-07-06 19:49:35.396268 | orchestrator | Sunday 06 July 2025 19:49:25 +0000 (0:00:01.140) 0:03:49.377 ***********
2025-07-06 19:49:35.396279 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-06 19:49:35.396290 | orchestrator |
2025-07-06 19:49:35.396301 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2025-07-06 19:49:35.396311 | orchestrator | Sunday 06 July 2025 19:49:26 +0000 (0:00:00.515) 0:03:49.893 ***********
2025-07-06 19:49:35.396322 | orchestrator | changed: [testbed-node-4]
2025-07-06 19:49:35.396333 | orchestrator | changed: [testbed-node-0]
2025-07-06 19:49:35.396343 | orchestrator | changed: [testbed-manager]
2025-07-06 19:49:35.396354 | orchestrator | changed: [testbed-node-5]
2025-07-06 19:49:35.396365 | orchestrator | changed: [testbed-node-2]
2025-07-06 19:49:35.396383 | orchestrator | changed: [testbed-node-1]
2025-07-06 19:49:35.396394 | orchestrator | changed: [testbed-node-3]
2025-07-06 19:49:35.396404 | orchestrator |
2025-07-06 19:49:35.396415 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2025-07-06 19:49:35.396426 | orchestrator | Sunday 06 July 2025 19:49:34 +0000 (0:00:08.601) 0:03:58.494 ***********
2025-07-06 19:49:35.396436 | orchestrator | changed: [testbed-manager]
2025-07-06 19:49:35.396447 | orchestrator | changed: [testbed-node-0]
2025-07-06 19:49:35.396458 | orchestrator | changed: [testbed-node-1]
2025-07-06 19:49:35.396475 | orchestrator | changed: [testbed-node-2]
2025-07-06 19:50:42.614098 | orchestrator | changed: [testbed-node-3]
2025-07-06 19:50:42.614206 | orchestrator | changed: [testbed-node-4]
2025-07-06 19:50:42.614220 | orchestrator | changed: [testbed-node-5]
2025-07-06 19:50:42.614232 | orchestrator |
2025-07-06 19:50:42.614245 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2025-07-06 19:50:42.614268 | orchestrator | Sunday 06 July 2025 19:49:35 +0000 (0:00:00.577) 0:03:59.072 ***********
2025-07-06 19:50:42.614280 | orchestrator | changed: [testbed-manager]
2025-07-06 19:50:42.614291 | orchestrator | changed: [testbed-node-0]
2025-07-06 19:50:42.614302 | orchestrator | changed: [testbed-node-1]
2025-07-06 19:50:42.614313 | orchestrator | changed: [testbed-node-2]
2025-07-06 19:50:42.614324 | orchestrator | changed: [testbed-node-3]
2025-07-06 19:50:42.614335 | orchestrator | changed: [testbed-node-4]
2025-07-06 19:50:42.614346 | orchestrator | changed: [testbed-node-5]
2025-07-06 19:50:42.614357 | orchestrator |
2025-07-06 19:50:42.614368 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2025-07-06 19:50:42.614380 | orchestrator | Sunday 06 July 2025 19:49:36 +0000 (0:00:01.068) 0:04:00.141 ***********
2025-07-06 19:50:42.614390 | orchestrator | changed: [testbed-manager]
2025-07-06 19:50:42.614401 | orchestrator | changed: [testbed-node-0]
2025-07-06 19:50:42.614412 | orchestrator | changed: [testbed-node-1]
2025-07-06 19:50:42.614425 | orchestrator | changed: [testbed-node-2]
2025-07-06 19:50:42.614443 | orchestrator | changed: [testbed-node-4]
2025-07-06 19:50:42.614460 | orchestrator | changed: [testbed-node-3]
2025-07-06 19:50:42.614471 | orchestrator | changed: [testbed-node-5]
2025-07-06 19:50:42.614482 | orchestrator |
2025-07-06 19:50:42.614493 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2025-07-06 19:50:42.614504 | orchestrator | Sunday 06 July 2025 19:49:37 +0000 (0:00:01.023) 0:04:01.164 ***********
2025-07-06 19:50:42.614515 | orchestrator | ok: [testbed-manager]
2025-07-06 19:50:42.614527 | orchestrator | ok: [testbed-node-0]
2025-07-06 19:50:42.614539 | orchestrator | ok: [testbed-node-1]
2025-07-06 19:50:42.614550 | orchestrator | ok: [testbed-node-2]
2025-07-06 19:50:42.614560 | orchestrator | ok: [testbed-node-3]
2025-07-06 19:50:42.614571 | orchestrator | ok: [testbed-node-4]
2025-07-06 19:50:42.614582 | orchestrator | ok: [testbed-node-5]
2025-07-06 19:50:42.614593 | orchestrator |
2025-07-06 19:50:42.614618 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2025-07-06 19:50:42.614641 | orchestrator | Sunday 06 July 2025 19:49:37 +0000 (0:00:00.265) 0:04:01.430 ***********
2025-07-06 19:50:42.614652 | orchestrator | ok: [testbed-manager]
2025-07-06 19:50:42.614663 | orchestrator | ok: [testbed-node-0]
2025-07-06 19:50:42.614674 | orchestrator | ok: [testbed-node-1]
2025-07-06 19:50:42.614685 | orchestrator | ok: [testbed-node-2]
2025-07-06 19:50:42.614696 | orchestrator | ok: [testbed-node-3]
2025-07-06 19:50:42.614706 | orchestrator | ok: [testbed-node-4]
2025-07-06 19:50:42.614717 | orchestrator | ok: [testbed-node-5]
2025-07-06 19:50:42.614728 | orchestrator |
2025-07-06 19:50:42.614739 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2025-07-06 19:50:42.614750 | orchestrator | Sunday 06 July 2025 19:49:38 +0000 (0:00:00.294) 0:04:01.724 ***********
2025-07-06 19:50:42.614762 | orchestrator | ok: [testbed-manager]
2025-07-06 19:50:42.614772 | orchestrator | ok: [testbed-node-0]
2025-07-06 19:50:42.614783 | orchestrator | ok: [testbed-node-1]
2025-07-06 19:50:42.614818 | orchestrator | ok: [testbed-node-2]
2025-07-06 19:50:42.614829 | orchestrator | ok: [testbed-node-3]
2025-07-06 19:50:42.614881 | orchestrator | ok: [testbed-node-4]
2025-07-06 19:50:42.614892 | orchestrator | ok: [testbed-node-5]
2025-07-06 19:50:42.614903 | orchestrator |
2025-07-06 19:50:42.614914 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2025-07-06 19:50:42.614925 | orchestrator | Sunday 06 July 2025 19:49:38 +0000 (0:00:00.271) 0:04:01.996 ***********
2025-07-06 19:50:42.614936 | orchestrator | ok: [testbed-node-0]
2025-07-06 19:50:42.614946 | orchestrator | ok: [testbed-manager]
2025-07-06 19:50:42.614957 | orchestrator | ok: [testbed-node-4]
2025-07-06 19:50:42.614968 | orchestrator | ok: [testbed-node-1]
2025-07-06 19:50:42.614979 | orchestrator | ok: [testbed-node-2]
2025-07-06 19:50:42.614989 | orchestrator | ok: [testbed-node-5]
2025-07-06 19:50:42.615000 | orchestrator | ok: [testbed-node-3]
2025-07-06 19:50:42.615011 | orchestrator |
2025-07-06 19:50:42.615022 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2025-07-06 19:50:42.615049 | orchestrator | Sunday 06 July 2025 19:49:43 +0000 (0:00:05.639) 0:04:07.636 ***********
2025-07-06 19:50:42.615062 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-06 19:50:42.615076 | orchestrator |
2025-07-06 19:50:42.615087 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2025-07-06 19:50:42.615098 | orchestrator | Sunday 06 July 2025 19:49:44 +0000 (0:00:00.379) 0:04:08.015 ***********
2025-07-06 19:50:42.615109 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2025-07-06 19:50:42.615120 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2025-07-06 19:50:42.615131 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2025-07-06 19:50:42.615142 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2025-07-06 19:50:42.615153 | orchestrator | skipping: [testbed-manager]
2025-07-06 19:50:42.615164 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2025-07-06 19:50:42.615175 | orchestrator | skipping: [testbed-node-0]
2025-07-06 19:50:42.615186 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2025-07-06 19:50:42.615197 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2025-07-06 19:50:42.615207 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2025-07-06 19:50:42.615218 | orchestrator | skipping: [testbed-node-1]
2025-07-06 19:50:42.615229 | orchestrator | skipping: [testbed-node-2]
2025-07-06 19:50:42.615240 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2025-07-06 19:50:42.615251 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2025-07-06 19:50:42.615261 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2025-07-06 19:50:42.615272 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2025-07-06 19:50:42.615283 | orchestrator | skipping: [testbed-node-3]
2025-07-06 19:50:42.615311 | orchestrator | skipping: [testbed-node-4]
2025-07-06 19:50:42.615322 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2025-07-06 19:50:42.615333 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2025-07-06 19:50:42.615344 | orchestrator | skipping: [testbed-node-5]
2025-07-06 19:50:42.615355 | orchestrator |
2025-07-06 19:50:42.615366 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2025-07-06 19:50:42.615376 | orchestrator | Sunday 06 July 2025 19:49:44 +0000 (0:00:00.324) 0:04:08.339 ***********
2025-07-06 19:50:42.615388 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-06 19:50:42.615399 | orchestrator |
2025-07-06 19:50:42.615410 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2025-07-06 19:50:42.615429 | orchestrator | Sunday 06 July 2025 19:49:45 +0000 (0:00:00.370) 0:04:08.710 ***********
2025-07-06 19:50:42.615440 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2025-07-06 19:50:42.615451 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2025-07-06 19:50:42.615461 | orchestrator | skipping: [testbed-manager]
2025-07-06 19:50:42.615472 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2025-07-06 19:50:42.615483 | orchestrator | skipping: [testbed-node-0]
2025-07-06 19:50:42.615494 | orchestrator | skipping: [testbed-node-1]
2025-07-06 19:50:42.615505 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2025-07-06 19:50:42.615515 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2025-07-06 19:50:42.615526 | orchestrator | skipping: [testbed-node-2]
2025-07-06 19:50:42.615537 | orchestrator | skipping: [testbed-node-3]
2025-07-06 19:50:42.615548 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2025-07-06 19:50:42.615558 | orchestrator | skipping: [testbed-node-4]
2025-07-06 19:50:42.615569 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2025-07-06 19:50:42.615580 | orchestrator | skipping: [testbed-node-5]
2025-07-06 19:50:42.615591 | orchestrator |
2025-07-06 19:50:42.615601 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2025-07-06 19:50:42.615612 | orchestrator | Sunday 06 July 2025 19:49:45 +0000 (0:00:00.323) 0:04:09.034 ***********
2025-07-06 19:50:42.615623 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-06 19:50:42.615635 | orchestrator |
2025-07-06 19:50:42.615645 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2025-07-06 19:50:42.615656 | orchestrator | Sunday 06 July 2025 19:49:45 +0000 (0:00:00.507) 0:04:09.542 ***********
2025-07-06 19:50:42.615667 | orchestrator | changed: [testbed-manager]
2025-07-06 19:50:42.615678 | orchestrator | changed: [testbed-node-1]
2025-07-06 19:50:42.615688 | orchestrator | changed: [testbed-node-4]
2025-07-06 19:50:42.615699 | orchestrator | changed: [testbed-node-0]
2025-07-06 19:50:42.615710 | orchestrator | changed: [testbed-node-2]
2025-07-06 19:50:42.615720 | orchestrator | changed: [testbed-node-5]
2025-07-06 19:50:42.615731 | orchestrator | changed: [testbed-node-3]
2025-07-06 19:50:42.615741 | orchestrator |
2025-07-06 19:50:42.615752 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2025-07-06 19:50:42.615763 | orchestrator | Sunday 06 July 2025 19:50:19 +0000 (0:00:34.009) 0:04:43.551 ***********
2025-07-06 19:50:42.615774 | orchestrator | changed: [testbed-manager]
2025-07-06 19:50:42.615785 | orchestrator | changed: [testbed-node-0]
2025-07-06 19:50:42.615795 | orchestrator | changed: [testbed-node-1]
2025-07-06 19:50:42.615806 | orchestrator | changed: [testbed-node-2]
2025-07-06 19:50:42.615822 | orchestrator | changed: [testbed-node-4]
2025-07-06 19:50:42.615849 | orchestrator | changed: [testbed-node-5]
2025-07-06 19:50:42.615860 | orchestrator | changed: [testbed-node-3]
2025-07-06 19:50:42.615871 | orchestrator |
2025-07-06 19:50:42.615882 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2025-07-06 19:50:42.615893 | orchestrator | Sunday 06 July 2025 19:50:27 +0000 (0:00:07.912) 0:04:51.464 ***********
2025-07-06 19:50:42.615904 | orchestrator | changed: [testbed-manager]
2025-07-06 19:50:42.615914 | orchestrator | changed: [testbed-node-1]
2025-07-06 19:50:42.615925 | orchestrator | changed: [testbed-node-0]
2025-07-06 19:50:42.615936 | orchestrator | changed: [testbed-node-4]
2025-07-06 19:50:42.615946 | orchestrator | changed: [testbed-node-2]
2025-07-06 19:50:42.615957 | orchestrator | changed: [testbed-node-5]
2025-07-06 19:50:42.615968 | orchestrator | changed: [testbed-node-3]
2025-07-06 19:50:42.615978 | orchestrator |
2025-07-06 19:50:42.615989 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2025-07-06 19:50:42.616008 | orchestrator | Sunday 06 July 2025 19:50:35 +0000 (0:00:07.384) 0:04:58.849 ***********
2025-07-06 19:50:42.616020 | orchestrator | ok: [testbed-manager]
2025-07-06 19:50:42.616030 | orchestrator | ok: [testbed-node-0]
2025-07-06 19:50:42.616041 | orchestrator | ok: [testbed-node-1]
2025-07-06 19:50:42.616052 | orchestrator | ok: [testbed-node-4]
2025-07-06 19:50:42.616063 | orchestrator | ok: [testbed-node-5]
2025-07-06 19:50:42.616073 | orchestrator | ok: [testbed-node-3]
2025-07-06 19:50:42.616084 | orchestrator | ok: [testbed-node-2]
2025-07-06 19:50:42.616095 | orchestrator |
2025-07-06 19:50:42.616105 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2025-07-06 19:50:42.616116 | orchestrator | Sunday 06 July 2025 19:50:36 +0000 (0:00:01.716) 0:05:00.565 ***********
2025-07-06 19:50:42.616127 | orchestrator | changed: [testbed-manager]
2025-07-06 19:50:42.616138 | orchestrator | changed: [testbed-node-4]
2025-07-06 19:50:42.616149 | orchestrator | changed: [testbed-node-1]
2025-07-06 19:50:42.616159 | orchestrator | changed: [testbed-node-5]
2025-07-06 19:50:42.616170 | orchestrator | changed: [testbed-node-2]
2025-07-06 19:50:42.616181 | orchestrator | changed: [testbed-node-0]
2025-07-06 19:50:42.616192 | orchestrator | changed: [testbed-node-3]
2025-07-06 19:50:42.616202 | orchestrator |
2025-07-06 19:50:42.616214 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2025-07-06 19:50:42.616231 | orchestrator | Sunday 06 July 2025 19:50:42 +0000 (0:00:05.719) 0:05:06.285 ***********
2025-07-06 19:50:53.459542 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-06 19:50:53.459687 | orchestrator |
2025-07-06 19:50:53.459725 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2025-07-06 19:50:53.459749 | orchestrator | Sunday 06 July 2025 19:50:42 +0000 (0:00:00.401) 0:05:06.686 ***********
2025-07-06 19:50:53.459767 | orchestrator | changed: [testbed-manager]
2025-07-06 19:50:53.459786 | orchestrator | changed: [testbed-node-0]
2025-07-06 19:50:53.459805 | orchestrator | changed: [testbed-node-1]
2025-07-06 19:50:53.459822 | orchestrator | changed: [testbed-node-2]
2025-07-06 19:50:53.459936 | orchestrator | changed: [testbed-node-3]
2025-07-06 19:50:53.459963 | orchestrator | changed: [testbed-node-4]
2025-07-06 19:50:53.459980 | orchestrator | changed: [testbed-node-5]
2025-07-06 19:50:53.459998 | orchestrator |
2025-07-06 19:50:53.460016 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2025-07-06 19:50:53.460035 | orchestrator | Sunday 06 July 2025 19:50:43 +0000 (0:00:00.694) 0:05:07.381 ***********
2025-07-06 19:50:53.460052 | orchestrator | ok: [testbed-manager]
2025-07-06 19:50:53.460072 | orchestrator | ok: [testbed-node-0]
2025-07-06 19:50:53.460090 | orchestrator | ok: [testbed-node-2]
2025-07-06 19:50:53.460109 | orchestrator | ok: [testbed-node-1]
2025-07-06 19:50:53.460128 | orchestrator | ok: [testbed-node-4]
2025-07-06 19:50:53.460147 | orchestrator | ok: [testbed-node-5]
2025-07-06 19:50:53.460165 | orchestrator | ok: [testbed-node-3]
2025-07-06 19:50:53.460184 | orchestrator |
2025-07-06 19:50:53.460196 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2025-07-06 19:50:53.460208 | orchestrator | Sunday 06 July 2025 19:50:45 +0000 (0:00:01.671) 0:05:09.052 ***********
2025-07-06 19:50:53.460219 | orchestrator | changed: [testbed-manager]
2025-07-06 19:50:53.460230 | orchestrator | changed: [testbed-node-1]
2025-07-06 19:50:53.460241 | orchestrator | changed: [testbed-node-2]
2025-07-06 19:50:53.460252 | orchestrator | changed: [testbed-node-0]
2025-07-06 19:50:53.460263 | orchestrator | changed: [testbed-node-4]
2025-07-06 19:50:53.460274 | orchestrator | changed: [testbed-node-3]
2025-07-06 19:50:53.460285 | orchestrator | changed: [testbed-node-5]
2025-07-06 19:50:53.460295 | orchestrator |
2025-07-06 19:50:53.460306 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2025-07-06 19:50:53.460317 | orchestrator | Sunday 06 July 2025 19:50:46 +0000 (0:00:00.761) 0:05:09.813 ***********
2025-07-06 19:50:53.460357 | orchestrator | skipping: [testbed-manager]
2025-07-06 19:50:53.460369 | orchestrator | skipping: [testbed-node-0]
2025-07-06 19:50:53.460379 | orchestrator | skipping: [testbed-node-1]
2025-07-06 19:50:53.460390 | orchestrator | skipping: [testbed-node-2]
2025-07-06 19:50:53.460400 | orchestrator | skipping: [testbed-node-3]
2025-07-06 19:50:53.460411 | orchestrator | skipping: [testbed-node-4]
2025-07-06 19:50:53.460425 | orchestrator | skipping: [testbed-node-5]
2025-07-06 19:50:53.460444 | orchestrator |
2025-07-06 19:50:53.460462 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2025-07-06 19:50:53.460480 | orchestrator | Sunday 06 July 2025 19:50:46 +0000 (0:00:00.261) 0:05:10.075 ***********
2025-07-06 19:50:53.460507 | orchestrator | skipping: [testbed-manager]
2025-07-06 19:50:53.460526 | orchestrator | skipping: [testbed-node-0]
2025-07-06 19:50:53.460543 | orchestrator | skipping: [testbed-node-1]
2025-07-06 19:50:53.460561 | orchestrator | skipping: [testbed-node-2]
2025-07-06 19:50:53.460579 | orchestrator | skipping: [testbed-node-3]
2025-07-06 19:50:53.460597 | orchestrator | skipping: [testbed-node-4]
2025-07-06 19:50:53.460617 | orchestrator | skipping: [testbed-node-5]
2025-07-06 19:50:53.460636 | orchestrator |
2025-07-06 19:50:53.460654 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2025-07-06 19:50:53.460673 | orchestrator | Sunday 06 July 2025 19:50:46 +0000 (0:00:00.386) 0:05:10.462 ***********
2025-07-06 19:50:53.460690 | orchestrator | ok: [testbed-manager]
2025-07-06 19:50:53.460701 | orchestrator | ok: [testbed-node-0]
2025-07-06 19:50:53.460712 | orchestrator | ok: [testbed-node-1]
2025-07-06 19:50:53.460723 | orchestrator | ok: [testbed-node-2]
2025-07-06 19:50:53.460734 | orchestrator | ok: [testbed-node-3]
2025-07-06 19:50:53.460748 | orchestrator | ok: [testbed-node-4]
2025-07-06 19:50:53.460767 | orchestrator | ok: [testbed-node-5]
2025-07-06 19:50:53.460785 | orchestrator |
2025-07-06 19:50:53.460803 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2025-07-06 19:50:53.460830 | orchestrator | Sunday 06 July 2025 19:50:47 +0000 (0:00:00.267) 0:05:10.729 ***********
2025-07-06 19:50:53.460886 | orchestrator | skipping: [testbed-manager]
2025-07-06 19:50:53.460905 | orchestrator | skipping: [testbed-node-0]
2025-07-06 19:50:53.460922 | orchestrator | skipping: [testbed-node-1]
2025-07-06 19:50:53.460939 | orchestrator | skipping: [testbed-node-2]
2025-07-06 19:50:53.460957 | orchestrator | skipping: [testbed-node-3]
2025-07-06 19:50:53.460975 | orchestrator | skipping: [testbed-node-4]
2025-07-06 19:50:53.460992 | orchestrator | skipping: [testbed-node-5]
2025-07-06 19:50:53.461011 | orchestrator |
2025-07-06 19:50:53.461029 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2025-07-06 19:50:53.461048 | orchestrator | Sunday 06 July 2025 19:50:47 +0000 (0:00:00.290) 0:05:11.020 ***********
2025-07-06 19:50:53.461060 | orchestrator | ok: [testbed-manager]
2025-07-06 19:50:53.461070 | orchestrator | ok: [testbed-node-0]
2025-07-06 19:50:53.461081 | orchestrator | ok: [testbed-node-1]
2025-07-06 19:50:53.461092 | orchestrator | ok: [testbed-node-2]
2025-07-06 19:50:53.461102 | orchestrator | ok: [testbed-node-3]
2025-07-06 19:50:53.461112 | orchestrator | ok: [testbed-node-4]
2025-07-06 19:50:53.461123 | orchestrator | ok: [testbed-node-5]
2025-07-06 19:50:53.461134 | orchestrator |
2025-07-06 19:50:53.461144 | orchestrator |
TASK [osism.services.docker : Print used docker version] *********************** 2025-07-06 19:50:53.461155 | orchestrator | Sunday 06 July 2025 19:50:47 +0000 (0:00:00.327) 0:05:11.348 *********** 2025-07-06 19:50:53.461166 | orchestrator | ok: [testbed-manager] =>  2025-07-06 19:50:53.461176 | orchestrator |  docker_version: 5:27.5.1 2025-07-06 19:50:53.461187 | orchestrator | ok: [testbed-node-0] =>  2025-07-06 19:50:53.461198 | orchestrator |  docker_version: 5:27.5.1 2025-07-06 19:50:53.461208 | orchestrator | ok: [testbed-node-1] =>  2025-07-06 19:50:53.461219 | orchestrator |  docker_version: 5:27.5.1 2025-07-06 19:50:53.461229 | orchestrator | ok: [testbed-node-2] =>  2025-07-06 19:50:53.461240 | orchestrator |  docker_version: 5:27.5.1 2025-07-06 19:50:53.461265 | orchestrator | ok: [testbed-node-3] =>  2025-07-06 19:50:53.461276 | orchestrator |  docker_version: 5:27.5.1 2025-07-06 19:50:53.461310 | orchestrator | ok: [testbed-node-4] =>  2025-07-06 19:50:53.461321 | orchestrator |  docker_version: 5:27.5.1 2025-07-06 19:50:53.461332 | orchestrator | ok: [testbed-node-5] =>  2025-07-06 19:50:53.461342 | orchestrator |  docker_version: 5:27.5.1 2025-07-06 19:50:53.461353 | orchestrator | 2025-07-06 19:50:53.461364 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2025-07-06 19:50:53.461374 | orchestrator | Sunday 06 July 2025 19:50:47 +0000 (0:00:00.284) 0:05:11.633 *********** 2025-07-06 19:50:53.461385 | orchestrator | ok: [testbed-manager] =>  2025-07-06 19:50:53.461396 | orchestrator |  docker_cli_version: 5:27.5.1 2025-07-06 19:50:53.461406 | orchestrator | ok: [testbed-node-0] =>  2025-07-06 19:50:53.461417 | orchestrator |  docker_cli_version: 5:27.5.1 2025-07-06 19:50:53.461427 | orchestrator | ok: [testbed-node-1] =>  2025-07-06 19:50:53.461456 | orchestrator |  docker_cli_version: 5:27.5.1 2025-07-06 19:50:53.461467 | orchestrator | ok: [testbed-node-2] =>  2025-07-06 19:50:53.461478 | orchestrator 
|  docker_cli_version: 5:27.5.1 2025-07-06 19:50:53.461488 | orchestrator | ok: [testbed-node-3] =>  2025-07-06 19:50:53.461499 | orchestrator |  docker_cli_version: 5:27.5.1 2025-07-06 19:50:53.461510 | orchestrator | ok: [testbed-node-4] =>  2025-07-06 19:50:53.461520 | orchestrator |  docker_cli_version: 5:27.5.1 2025-07-06 19:50:53.461530 | orchestrator | ok: [testbed-node-5] =>  2025-07-06 19:50:53.461541 | orchestrator |  docker_cli_version: 5:27.5.1 2025-07-06 19:50:53.461551 | orchestrator | 2025-07-06 19:50:53.461562 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-07-06 19:50:53.461573 | orchestrator | Sunday 06 July 2025 19:50:48 +0000 (0:00:00.437) 0:05:12.071 *********** 2025-07-06 19:50:53.461583 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:50:53.461594 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:50:53.461604 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:50:53.461615 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:50:53.461625 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:50:53.461636 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:50:53.461646 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:50:53.461657 | orchestrator | 2025-07-06 19:50:53.461667 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-07-06 19:50:53.461678 | orchestrator | Sunday 06 July 2025 19:50:48 +0000 (0:00:00.289) 0:05:12.361 *********** 2025-07-06 19:50:53.461689 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:50:53.461699 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:50:53.461710 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:50:53.461720 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:50:53.461730 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:50:53.461741 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:50:53.461751 | orchestrator 
| skipping: [testbed-node-5] 2025-07-06 19:50:53.461762 | orchestrator | 2025-07-06 19:50:53.461772 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-07-06 19:50:53.461783 | orchestrator | Sunday 06 July 2025 19:50:48 +0000 (0:00:00.276) 0:05:12.637 *********** 2025-07-06 19:50:53.461797 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 19:50:53.461810 | orchestrator | 2025-07-06 19:50:53.461821 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-07-06 19:50:53.461832 | orchestrator | Sunday 06 July 2025 19:50:49 +0000 (0:00:00.384) 0:05:13.022 *********** 2025-07-06 19:50:53.461877 | orchestrator | ok: [testbed-manager] 2025-07-06 19:50:53.461898 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:50:53.461917 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:50:53.461935 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:50:53.461966 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:50:53.461976 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:50:53.461987 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:50:53.461997 | orchestrator | 2025-07-06 19:50:53.462008 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-07-06 19:50:53.462091 | orchestrator | Sunday 06 July 2025 19:50:50 +0000 (0:00:00.794) 0:05:13.816 *********** 2025-07-06 19:50:53.462104 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:50:53.462114 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:50:53.462125 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:50:53.462136 | orchestrator | ok: [testbed-manager] 2025-07-06 19:50:53.462146 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:50:53.462157 | orchestrator 
| ok: [testbed-node-2] 2025-07-06 19:50:53.462167 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:50:53.462178 | orchestrator | 2025-07-06 19:50:53.462189 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-07-06 19:50:53.462201 | orchestrator | Sunday 06 July 2025 19:50:52 +0000 (0:00:02.766) 0:05:16.583 *********** 2025-07-06 19:50:53.462212 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-07-06 19:50:53.462224 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-07-06 19:50:53.462235 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-07-06 19:50:53.462245 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-07-06 19:50:53.462256 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-07-06 19:50:53.462267 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-07-06 19:50:53.462277 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:50:53.462288 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-07-06 19:50:53.462299 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-07-06 19:50:53.462309 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-07-06 19:50:53.462320 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:50:53.462331 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-07-06 19:50:53.462341 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-07-06 19:50:53.462352 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-07-06 19:50:53.462363 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:50:53.462373 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-07-06 19:50:53.462384 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-07-06 19:50:53.462405 | orchestrator | skipping: [testbed-node-3] => 
(item=docker-engine)  2025-07-06 19:51:52.165071 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:51:52.165189 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-07-06 19:51:52.165205 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-07-06 19:51:52.165217 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-07-06 19:51:52.165228 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:51:52.165238 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:51:52.165249 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-07-06 19:51:52.165260 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-07-06 19:51:52.165271 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-07-06 19:51:52.165281 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:51:52.165293 | orchestrator | 2025-07-06 19:51:52.165304 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-07-06 19:51:52.165317 | orchestrator | Sunday 06 July 2025 19:50:53 +0000 (0:00:00.760) 0:05:17.343 *********** 2025-07-06 19:51:52.165328 | orchestrator | ok: [testbed-manager] 2025-07-06 19:51:52.165338 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:51:52.165349 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:51:52.165360 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:51:52.165371 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:51:52.165381 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:51:52.165416 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:51:52.165428 | orchestrator | 2025-07-06 19:51:52.165438 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-07-06 19:51:52.165449 | orchestrator | Sunday 06 July 2025 19:51:00 +0000 (0:00:06.402) 0:05:23.745 *********** 2025-07-06 19:51:52.165460 | orchestrator | ok: [testbed-manager] 
2025-07-06 19:51:52.165470 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:51:52.165481 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:51:52.165491 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:51:52.165501 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:51:52.165512 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:51:52.165522 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:51:52.165535 | orchestrator | 2025-07-06 19:51:52.165547 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-07-06 19:51:52.165560 | orchestrator | Sunday 06 July 2025 19:51:01 +0000 (0:00:01.063) 0:05:24.808 *********** 2025-07-06 19:51:52.165572 | orchestrator | ok: [testbed-manager] 2025-07-06 19:51:52.165583 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:51:52.165596 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:51:52.165607 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:51:52.165619 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:51:52.165632 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:51:52.165643 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:51:52.165655 | orchestrator | 2025-07-06 19:51:52.165667 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-07-06 19:51:52.165679 | orchestrator | Sunday 06 July 2025 19:51:09 +0000 (0:00:08.066) 0:05:32.875 *********** 2025-07-06 19:51:52.165692 | orchestrator | changed: [testbed-manager] 2025-07-06 19:51:52.165704 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:51:52.165715 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:51:52.165725 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:51:52.165736 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:51:52.165747 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:51:52.165757 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:51:52.165768 | 
orchestrator | 2025-07-06 19:51:52.165779 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-07-06 19:51:52.165789 | orchestrator | Sunday 06 July 2025 19:51:12 +0000 (0:00:03.390) 0:05:36.265 *********** 2025-07-06 19:51:52.165800 | orchestrator | ok: [testbed-manager] 2025-07-06 19:51:52.165810 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:51:52.165821 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:51:52.165831 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:51:52.165842 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:51:52.165852 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:51:52.165878 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:51:52.165912 | orchestrator | 2025-07-06 19:51:52.165924 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-07-06 19:51:52.165935 | orchestrator | Sunday 06 July 2025 19:51:14 +0000 (0:00:01.508) 0:05:37.773 *********** 2025-07-06 19:51:52.165945 | orchestrator | ok: [testbed-manager] 2025-07-06 19:51:52.165956 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:51:52.165966 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:51:52.165977 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:51:52.165987 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:51:52.165998 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:51:52.166008 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:51:52.166079 | orchestrator | 2025-07-06 19:51:52.166092 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-07-06 19:51:52.166103 | orchestrator | Sunday 06 July 2025 19:51:15 +0000 (0:00:01.346) 0:05:39.120 *********** 2025-07-06 19:51:52.166114 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:51:52.166124 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:51:52.166135 | orchestrator | skipping: 
[testbed-node-2] 2025-07-06 19:51:52.166154 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:51:52.166165 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:51:52.166175 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:51:52.166218 | orchestrator | changed: [testbed-manager] 2025-07-06 19:51:52.166231 | orchestrator | 2025-07-06 19:51:52.166242 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-07-06 19:51:52.166253 | orchestrator | Sunday 06 July 2025 19:51:16 +0000 (0:00:00.580) 0:05:39.700 *********** 2025-07-06 19:51:52.166264 | orchestrator | ok: [testbed-manager] 2025-07-06 19:51:52.166274 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:51:52.166285 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:51:52.166296 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:51:52.166307 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:51:52.166317 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:51:52.166328 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:51:52.166339 | orchestrator | 2025-07-06 19:51:52.166350 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-07-06 19:51:52.166361 | orchestrator | Sunday 06 July 2025 19:51:26 +0000 (0:00:10.089) 0:05:49.789 *********** 2025-07-06 19:51:52.166372 | orchestrator | changed: [testbed-manager] 2025-07-06 19:51:52.166401 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:51:52.166413 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:51:52.166423 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:51:52.166433 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:51:52.166444 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:51:52.166454 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:51:52.166465 | orchestrator | 2025-07-06 19:51:52.166476 | orchestrator | TASK [osism.services.docker : Install docker-cli package] 
********************** 2025-07-06 19:51:52.166487 | orchestrator | Sunday 06 July 2025 19:51:26 +0000 (0:00:00.872) 0:05:50.662 *********** 2025-07-06 19:51:52.166497 | orchestrator | ok: [testbed-manager] 2025-07-06 19:51:52.166508 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:51:52.166518 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:51:52.166529 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:51:52.166539 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:51:52.166549 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:51:52.166560 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:51:52.166570 | orchestrator | 2025-07-06 19:51:52.166581 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-07-06 19:51:52.166591 | orchestrator | Sunday 06 July 2025 19:51:35 +0000 (0:00:08.875) 0:05:59.538 *********** 2025-07-06 19:51:52.166602 | orchestrator | ok: [testbed-manager] 2025-07-06 19:51:52.166612 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:51:52.166623 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:51:52.166633 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:51:52.166644 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:51:52.166654 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:51:52.166665 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:51:52.166675 | orchestrator | 2025-07-06 19:51:52.166686 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-07-06 19:51:52.166696 | orchestrator | Sunday 06 July 2025 19:51:46 +0000 (0:00:10.218) 0:06:09.756 *********** 2025-07-06 19:51:52.166707 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-07-06 19:51:52.166718 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-07-06 19:51:52.166728 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-07-06 19:51:52.166739 | orchestrator | 
ok: [testbed-node-2] => (item=python3-docker) 2025-07-06 19:51:52.166749 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-07-06 19:51:52.166760 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-07-06 19:51:52.166770 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-07-06 19:51:52.166781 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-07-06 19:51:52.166792 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-07-06 19:51:52.166809 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-07-06 19:51:52.166820 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-07-06 19:51:52.166831 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-07-06 19:51:52.166841 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-07-06 19:51:52.166852 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-07-06 19:51:52.166862 | orchestrator | 2025-07-06 19:51:52.166873 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-07-06 19:51:52.166904 | orchestrator | Sunday 06 July 2025 19:51:47 +0000 (0:00:01.155) 0:06:10.911 *********** 2025-07-06 19:51:52.166915 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:51:52.166926 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:51:52.166937 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:51:52.166948 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:51:52.166966 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:51:52.166985 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:51:52.167002 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:51:52.167020 | orchestrator | 2025-07-06 19:51:52.167038 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-07-06 19:51:52.167056 | orchestrator | Sunday 06 July 2025 19:51:47 +0000 (0:00:00.498) 0:06:11.410 
*********** 2025-07-06 19:51:52.167085 | orchestrator | ok: [testbed-manager] 2025-07-06 19:51:52.167104 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:51:52.167122 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:51:52.167137 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:51:52.167148 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:51:52.167158 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:51:52.167169 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:51:52.167179 | orchestrator | 2025-07-06 19:51:52.167190 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-07-06 19:51:52.167202 | orchestrator | Sunday 06 July 2025 19:51:51 +0000 (0:00:03.629) 0:06:15.040 *********** 2025-07-06 19:51:52.167213 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:51:52.167223 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:51:52.167234 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:51:52.167244 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:51:52.167255 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:51:52.167265 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:51:52.167276 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:51:52.167286 | orchestrator | 2025-07-06 19:51:52.167297 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-07-06 19:51:52.167309 | orchestrator | Sunday 06 July 2025 19:51:51 +0000 (0:00:00.494) 0:06:15.534 *********** 2025-07-06 19:51:52.167319 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-07-06 19:51:52.167330 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-07-06 19:51:52.167341 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:51:52.167351 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-07-06 
19:51:52.167361 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-07-06 19:51:52.167372 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:51:52.167383 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-07-06 19:51:52.167393 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-07-06 19:51:52.167404 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:51:52.167414 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-07-06 19:51:52.167433 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-07-06 19:52:10.910464 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:52:10.910594 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-07-06 19:52:10.910620 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-07-06 19:52:10.910674 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:52:10.910696 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-07-06 19:52:10.910714 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-07-06 19:52:10.910733 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:52:10.910752 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-07-06 19:52:10.910771 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-07-06 19:52:10.910790 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:52:10.910809 | orchestrator | 2025-07-06 19:52:10.910830 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-07-06 19:52:10.910851 | orchestrator | Sunday 06 July 2025 19:51:52 +0000 (0:00:00.549) 0:06:16.084 *********** 2025-07-06 19:52:10.910870 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:52:10.910889 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:52:10.910937 | orchestrator | skipping: [testbed-node-1] 
2025-07-06 19:52:10.910949 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:52:10.910960 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:52:10.910971 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:52:10.910984 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:52:10.910996 | orchestrator | 2025-07-06 19:52:10.911009 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-07-06 19:52:10.911022 | orchestrator | Sunday 06 July 2025 19:51:52 +0000 (0:00:00.495) 0:06:16.579 *********** 2025-07-06 19:52:10.911036 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:52:10.911055 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:52:10.911075 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:52:10.911095 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:52:10.911114 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:52:10.911132 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:52:10.911152 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:52:10.911172 | orchestrator | 2025-07-06 19:52:10.911191 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-07-06 19:52:10.911205 | orchestrator | Sunday 06 July 2025 19:51:53 +0000 (0:00:00.495) 0:06:17.075 *********** 2025-07-06 19:52:10.911218 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:52:10.911231 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:52:10.911244 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:52:10.911261 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:52:10.911280 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:52:10.911300 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:52:10.911319 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:52:10.911334 | orchestrator | 2025-07-06 19:52:10.911345 | orchestrator | TASK [osism.services.docker : Ensure that some 
packages are not installed] ***** 2025-07-06 19:52:10.911356 | orchestrator | Sunday 06 July 2025 19:51:54 +0000 (0:00:00.695) 0:06:17.770 *********** 2025-07-06 19:52:10.911367 | orchestrator | ok: [testbed-manager] 2025-07-06 19:52:10.911378 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:52:10.911389 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:52:10.911399 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:52:10.911410 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:52:10.911420 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:52:10.911431 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:52:10.911448 | orchestrator | 2025-07-06 19:52:10.911467 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-07-06 19:52:10.911486 | orchestrator | Sunday 06 July 2025 19:51:56 +0000 (0:00:01.936) 0:06:19.707 *********** 2025-07-06 19:52:10.911521 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 19:52:10.911539 | orchestrator | 2025-07-06 19:52:10.911558 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-07-06 19:52:10.911591 | orchestrator | Sunday 06 July 2025 19:51:56 +0000 (0:00:00.832) 0:06:20.539 *********** 2025-07-06 19:52:10.911611 | orchestrator | ok: [testbed-manager] 2025-07-06 19:52:10.911630 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:52:10.911648 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:52:10.911665 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:52:10.911676 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:52:10.911687 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:52:10.911697 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:52:10.911708 | orchestrator | 2025-07-06 
2025-07-06 19:52:10.911719 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2025-07-06 19:52:10.911729 | orchestrator | Sunday 06 July 2025  19:51:57 +0000 (0:00:00.813)       0:06:21.353 ***********
2025-07-06 19:52:10.911740 | orchestrator | ok: [testbed-manager]
2025-07-06 19:52:10.911750 | orchestrator | changed: [testbed-node-0]
2025-07-06 19:52:10.911761 | orchestrator | changed: [testbed-node-1]
2025-07-06 19:52:10.911772 | orchestrator | changed: [testbed-node-2]
2025-07-06 19:52:10.911782 | orchestrator | changed: [testbed-node-3]
2025-07-06 19:52:10.911792 | orchestrator | changed: [testbed-node-4]
2025-07-06 19:52:10.911803 | orchestrator | changed: [testbed-node-5]
2025-07-06 19:52:10.911813 | orchestrator |
2025-07-06 19:52:10.911824 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2025-07-06 19:52:10.911835 | orchestrator | Sunday 06 July 2025  19:51:58 +0000 (0:00:01.031)       0:06:22.385 ***********
2025-07-06 19:52:10.911846 | orchestrator | ok: [testbed-manager]
2025-07-06 19:52:10.911856 | orchestrator | changed: [testbed-node-0]
2025-07-06 19:52:10.911867 | orchestrator | changed: [testbed-node-1]
2025-07-06 19:52:10.911877 | orchestrator | changed: [testbed-node-2]
2025-07-06 19:52:10.911888 | orchestrator | changed: [testbed-node-3]
2025-07-06 19:52:10.911928 | orchestrator | changed: [testbed-node-4]
2025-07-06 19:52:10.911940 | orchestrator | changed: [testbed-node-5]
2025-07-06 19:52:10.911950 | orchestrator |
2025-07-06 19:52:10.911961 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2025-07-06 19:52:10.911972 | orchestrator | Sunday 06 July 2025  19:52:00 +0000 (0:00:01.352)       0:06:23.738 ***********
2025-07-06 19:52:10.912002 | orchestrator | skipping: [testbed-manager]
2025-07-06 19:52:10.912014 | orchestrator | ok: [testbed-node-0]
2025-07-06 19:52:10.912024 | orchestrator | ok: [testbed-node-1]
2025-07-06 19:52:10.912035 | orchestrator | ok: [testbed-node-2]
2025-07-06 19:52:10.912046 | orchestrator | ok: [testbed-node-3]
2025-07-06 19:52:10.912056 | orchestrator | ok: [testbed-node-4]
2025-07-06 19:52:10.912067 | orchestrator | ok: [testbed-node-5]
2025-07-06 19:52:10.912077 | orchestrator |
2025-07-06 19:52:10.912088 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2025-07-06 19:52:10.912099 | orchestrator | Sunday 06 July 2025  19:52:01 +0000 (0:00:01.310)       0:06:25.048 ***********
2025-07-06 19:52:10.912110 | orchestrator | ok: [testbed-manager]
2025-07-06 19:52:10.912120 | orchestrator | changed: [testbed-node-0]
2025-07-06 19:52:10.912131 | orchestrator | changed: [testbed-node-1]
2025-07-06 19:52:10.912141 | orchestrator | changed: [testbed-node-2]
2025-07-06 19:52:10.912152 | orchestrator | changed: [testbed-node-3]
2025-07-06 19:52:10.912162 | orchestrator | changed: [testbed-node-4]
2025-07-06 19:52:10.912173 | orchestrator | changed: [testbed-node-5]
2025-07-06 19:52:10.912183 | orchestrator |
2025-07-06 19:52:10.912194 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2025-07-06 19:52:10.912204 | orchestrator | Sunday 06 July 2025  19:52:02 +0000 (0:00:01.259)       0:06:26.308 ***********
2025-07-06 19:52:10.912215 | orchestrator | changed: [testbed-manager]
2025-07-06 19:52:10.912226 | orchestrator | changed: [testbed-node-0]
2025-07-06 19:52:10.912236 | orchestrator | changed: [testbed-node-1]
2025-07-06 19:52:10.912247 | orchestrator | changed: [testbed-node-2]
2025-07-06 19:52:10.912257 | orchestrator | changed: [testbed-node-3]
2025-07-06 19:52:10.912268 | orchestrator | changed: [testbed-node-4]
2025-07-06 19:52:10.912278 | orchestrator | changed: [testbed-node-5]
2025-07-06 19:52:10.912299 | orchestrator |
2025-07-06 19:52:10.912310 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2025-07-06 19:52:10.912321 | orchestrator | Sunday 06 July 2025  19:52:03 +0000 (0:00:01.356)       0:06:27.664 ***********
2025-07-06 19:52:10.912331 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-06 19:52:10.912343 | orchestrator |
2025-07-06 19:52:10.912354 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2025-07-06 19:52:10.912364 | orchestrator | Sunday 06 July 2025  19:52:05 +0000 (0:00:01.041)       0:06:28.706 ***********
2025-07-06 19:52:10.912375 | orchestrator | ok: [testbed-node-0]
2025-07-06 19:52:10.912386 | orchestrator | ok: [testbed-manager]
2025-07-06 19:52:10.912397 | orchestrator | ok: [testbed-node-1]
2025-07-06 19:52:10.912407 | orchestrator | ok: [testbed-node-2]
2025-07-06 19:52:10.912418 | orchestrator | ok: [testbed-node-3]
2025-07-06 19:52:10.912429 | orchestrator | ok: [testbed-node-4]
2025-07-06 19:52:10.912439 | orchestrator | ok: [testbed-node-5]
2025-07-06 19:52:10.912450 | orchestrator |
2025-07-06 19:52:10.912461 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2025-07-06 19:52:10.912472 | orchestrator | Sunday 06 July 2025  19:52:06 +0000 (0:00:01.348)       0:06:30.054 ***********
2025-07-06 19:52:10.912483 | orchestrator | ok: [testbed-manager]
2025-07-06 19:52:10.912493 | orchestrator | ok: [testbed-node-0]
2025-07-06 19:52:10.912504 | orchestrator | ok: [testbed-node-1]
2025-07-06 19:52:10.912514 | orchestrator | ok: [testbed-node-2]
2025-07-06 19:52:10.912525 | orchestrator | ok: [testbed-node-3]
2025-07-06 19:52:10.912535 | orchestrator | ok: [testbed-node-4]
2025-07-06 19:52:10.912546 | orchestrator | ok: [testbed-node-5]
2025-07-06 19:52:10.912556 | orchestrator |
2025-07-06 19:52:10.912567 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2025-07-06 19:52:10.912578 | orchestrator | Sunday 06 July 2025  19:52:07 +0000 (0:00:01.096)       0:06:31.151 ***********
2025-07-06 19:52:10.912589 | orchestrator | ok: [testbed-manager]
2025-07-06 19:52:10.912599 | orchestrator | ok: [testbed-node-0]
2025-07-06 19:52:10.912610 | orchestrator | ok: [testbed-node-1]
2025-07-06 19:52:10.912621 | orchestrator | ok: [testbed-node-2]
2025-07-06 19:52:10.912631 | orchestrator | ok: [testbed-node-3]
2025-07-06 19:52:10.912641 | orchestrator | ok: [testbed-node-4]
2025-07-06 19:52:10.912652 | orchestrator | ok: [testbed-node-5]
2025-07-06 19:52:10.912663 | orchestrator |
2025-07-06 19:52:10.912674 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2025-07-06 19:52:10.912685 | orchestrator | Sunday 06 July 2025  19:52:08 +0000 (0:00:01.270)       0:06:32.422 ***********
2025-07-06 19:52:10.912696 | orchestrator | ok: [testbed-manager]
2025-07-06 19:52:10.912706 | orchestrator | ok: [testbed-node-0]
2025-07-06 19:52:10.912717 | orchestrator | ok: [testbed-node-1]
2025-07-06 19:52:10.912737 | orchestrator | ok: [testbed-node-4]
2025-07-06 19:52:10.912748 | orchestrator | ok: [testbed-node-3]
2025-07-06 19:52:10.912758 | orchestrator | ok: [testbed-node-2]
2025-07-06 19:52:10.912769 | orchestrator | ok: [testbed-node-5]
2025-07-06 19:52:10.912780 | orchestrator |
2025-07-06 19:52:10.912791 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2025-07-06 19:52:10.912801 | orchestrator | Sunday 06 July 2025  19:52:09 +0000 (0:00:01.107)       0:06:33.529 ***********
2025-07-06 19:52:10.912813 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-06 19:52:10.912824 | orchestrator |
2025-07-06 19:52:10.912834 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-07-06 19:52:10.912845 | orchestrator | Sunday 06 July 2025  19:52:10 +0000 (0:00:00.792)       0:06:34.321 ***********
2025-07-06 19:52:10.912857 | orchestrator |
2025-07-06 19:52:10.912876 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-07-06 19:52:10.912964 | orchestrator | Sunday 06 July 2025  19:52:10 +0000 (0:00:00.035)       0:06:34.357 ***********
2025-07-06 19:52:10.912979 | orchestrator |
2025-07-06 19:52:10.912990 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-07-06 19:52:10.913005 | orchestrator | Sunday 06 July 2025  19:52:10 +0000 (0:00:00.039)       0:06:34.396 ***********
2025-07-06 19:52:10.913024 | orchestrator |
2025-07-06 19:52:10.913042 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-07-06 19:52:10.913062 | orchestrator | Sunday 06 July 2025  19:52:10 +0000 (0:00:00.035)       0:06:34.431 ***********
2025-07-06 19:52:10.913081 | orchestrator |
2025-07-06 19:52:10.913111 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-07-06 19:52:36.260249 | orchestrator | Sunday 06 July 2025  19:52:10 +0000 (0:00:00.034)       0:06:34.466 ***********
2025-07-06 19:52:36.260391 | orchestrator |
2025-07-06 19:52:36.260421 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-07-06 19:52:36.260442 | orchestrator | Sunday 06 July 2025  19:52:10 +0000 (0:00:00.038)       0:06:34.505 ***********
2025-07-06 19:52:36.260457 | orchestrator |
2025-07-06 19:52:36.260468 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-07-06 19:52:36.260479 | orchestrator | Sunday 06 July 2025  19:52:10 +0000 (0:00:00.035)       0:06:34.540 ***********
2025-07-06 19:52:36.260490 | orchestrator |
2025-07-06 19:52:36.260501 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-07-06 19:52:36.260512 | orchestrator | Sunday 06 July 2025  19:52:10 +0000 (0:00:00.034)       0:06:34.575 ***********
2025-07-06 19:52:36.260523 | orchestrator | ok: [testbed-node-0]
2025-07-06 19:52:36.260535 | orchestrator | ok: [testbed-node-1]
2025-07-06 19:52:36.260545 | orchestrator | ok: [testbed-node-2]
2025-07-06 19:52:36.260556 | orchestrator |
2025-07-06 19:52:36.260567 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2025-07-06 19:52:36.260578 | orchestrator | Sunday 06 July 2025  19:52:12 +0000 (0:00:01.214)       0:06:35.789 ***********
2025-07-06 19:52:36.260588 | orchestrator | changed: [testbed-manager]
2025-07-06 19:52:36.260600 | orchestrator | changed: [testbed-node-0]
2025-07-06 19:52:36.260611 | orchestrator | changed: [testbed-node-1]
2025-07-06 19:52:36.260622 | orchestrator | changed: [testbed-node-2]
2025-07-06 19:52:36.260632 | orchestrator | changed: [testbed-node-3]
2025-07-06 19:52:36.260643 | orchestrator | changed: [testbed-node-4]
2025-07-06 19:52:36.260654 | orchestrator | changed: [testbed-node-5]
2025-07-06 19:52:36.260665 | orchestrator |
2025-07-06 19:52:36.260676 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2025-07-06 19:52:36.260687 | orchestrator | Sunday 06 July 2025  19:52:13 +0000 (0:00:01.190)       0:06:36.980 ***********
2025-07-06 19:52:36.260698 | orchestrator | changed: [testbed-manager]
2025-07-06 19:52:36.260709 | orchestrator | changed: [testbed-node-0]
2025-07-06 19:52:36.260720 | orchestrator | changed: [testbed-node-1]
2025-07-06 19:52:36.260731 | orchestrator | changed: [testbed-node-2]
2025-07-06 19:52:36.260741 | orchestrator | changed: [testbed-node-4]
2025-07-06 19:52:36.260752 | orchestrator | changed: [testbed-node-3]
2025-07-06 19:52:36.260763 | orchestrator | changed: [testbed-node-5]
2025-07-06 19:52:36.260774 | orchestrator |
2025-07-06 19:52:36.260786 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2025-07-06 19:52:36.260798 | orchestrator | Sunday 06 July 2025  19:52:14 +0000 (0:00:01.056)       0:06:38.036 ***********
2025-07-06 19:52:36.260811 | orchestrator | skipping: [testbed-manager]
2025-07-06 19:52:36.260823 | orchestrator | changed: [testbed-node-0]
2025-07-06 19:52:36.260835 | orchestrator | changed: [testbed-node-1]
2025-07-06 19:52:36.260847 | orchestrator | changed: [testbed-node-2]
2025-07-06 19:52:36.260859 | orchestrator | changed: [testbed-node-5]
2025-07-06 19:52:36.260871 | orchestrator | changed: [testbed-node-4]
2025-07-06 19:52:36.260884 | orchestrator | changed: [testbed-node-3]
2025-07-06 19:52:36.260896 | orchestrator |
2025-07-06 19:52:36.260945 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2025-07-06 19:52:36.260989 | orchestrator | Sunday 06 July 2025  19:52:16 +0000 (0:00:02.262)       0:06:40.298 ***********
2025-07-06 19:52:36.261002 | orchestrator | skipping: [testbed-node-0]
2025-07-06 19:52:36.261015 | orchestrator |
2025-07-06 19:52:36.261027 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2025-07-06 19:52:36.261040 | orchestrator | Sunday 06 July 2025  19:52:16 +0000 (0:00:00.107)       0:06:40.406 ***********
2025-07-06 19:52:36.261052 | orchestrator | ok: [testbed-manager]
2025-07-06 19:52:36.261064 | orchestrator | changed: [testbed-node-0]
2025-07-06 19:52:36.261077 | orchestrator | changed: [testbed-node-2]
2025-07-06 19:52:36.261089 | orchestrator | changed: [testbed-node-1]
2025-07-06 19:52:36.261102 | orchestrator | changed: [testbed-node-3]
2025-07-06 19:52:36.261114 | orchestrator | changed: [testbed-node-4]
2025-07-06 19:52:36.261142 | orchestrator | changed: [testbed-node-5]
2025-07-06 19:52:36.261153 | orchestrator |
2025-07-06 19:52:36.261164 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2025-07-06 19:52:36.261175 | orchestrator | Sunday 06 July 2025  19:52:17 +0000 (0:00:00.960)       0:06:41.366 ***********
2025-07-06 19:52:36.261186 | orchestrator | skipping: [testbed-manager]
2025-07-06 19:52:36.261196 | orchestrator | skipping: [testbed-node-0]
2025-07-06 19:52:36.261207 | orchestrator | skipping: [testbed-node-1]
2025-07-06 19:52:36.261217 | orchestrator | skipping: [testbed-node-2]
2025-07-06 19:52:36.261228 | orchestrator | skipping: [testbed-node-3]
2025-07-06 19:52:36.261239 | orchestrator | skipping: [testbed-node-4]
2025-07-06 19:52:36.261249 | orchestrator | skipping: [testbed-node-5]
2025-07-06 19:52:36.261260 | orchestrator |
2025-07-06 19:52:36.261271 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2025-07-06 19:52:36.261281 | orchestrator | Sunday 06 July 2025  19:52:18 +0000 (0:00:00.711)       0:06:42.078 ***********
2025-07-06 19:52:36.261293 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-06 19:52:36.261307 | orchestrator |
2025-07-06 19:52:36.261318 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2025-07-06 19:52:36.261328 | orchestrator | Sunday 06 July 2025  19:52:19 +0000 (0:00:00.859)       0:06:42.938 ***********
2025-07-06 19:52:36.261339 | orchestrator | ok: [testbed-manager]
2025-07-06 19:52:36.261350 | orchestrator | ok: [testbed-node-0]
2025-07-06 19:52:36.261360 | orchestrator | ok: [testbed-node-1]
2025-07-06 19:52:36.261371 | orchestrator | ok: [testbed-node-2]
2025-07-06 19:52:36.261382 | orchestrator | ok: [testbed-node-3]
2025-07-06 19:52:36.261393 | orchestrator | ok: [testbed-node-4]
2025-07-06 19:52:36.261403 | orchestrator | ok: [testbed-node-5]
2025-07-06 19:52:36.261422 | orchestrator |
2025-07-06 19:52:36.261441 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2025-07-06 19:52:36.261459 | orchestrator | Sunday 06 July 2025  19:52:20 +0000 (0:00:00.783)       0:06:43.721 ***********
2025-07-06 19:52:36.261477 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2025-07-06 19:52:36.261495 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2025-07-06 19:52:36.261537 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2025-07-06 19:52:36.261558 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2025-07-06 19:52:36.261576 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2025-07-06 19:52:36.261593 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2025-07-06 19:52:36.261611 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2025-07-06 19:52:36.261630 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2025-07-06 19:52:36.261647 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2025-07-06 19:52:36.261667 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2025-07-06 19:52:36.261685 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2025-07-06 19:52:36.261727 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2025-07-06 19:52:36.261747 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2025-07-06 19:52:36.261764 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2025-07-06 19:52:36.261775 | orchestrator |
2025-07-06 19:52:36.261786 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2025-07-06 19:52:36.261797 | orchestrator | Sunday 06 July 2025  19:52:22 +0000 (0:00:02.586)       0:06:46.308 ***********
2025-07-06 19:52:36.261808 | orchestrator | skipping: [testbed-manager]
2025-07-06 19:52:36.261819 | orchestrator | skipping: [testbed-node-0]
2025-07-06 19:52:36.261829 | orchestrator | skipping: [testbed-node-1]
2025-07-06 19:52:36.261840 | orchestrator | skipping: [testbed-node-2]
2025-07-06 19:52:36.261850 | orchestrator | skipping: [testbed-node-3]
2025-07-06 19:52:36.261861 | orchestrator | skipping: [testbed-node-4]
2025-07-06 19:52:36.261871 | orchestrator | skipping: [testbed-node-5]
2025-07-06 19:52:36.261882 | orchestrator |
2025-07-06 19:52:36.261893 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2025-07-06 19:52:36.261904 | orchestrator | Sunday 06 July 2025  19:52:23 +0000 (0:00:00.482)       0:06:46.791 ***********
2025-07-06 19:52:36.261944 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-06 19:52:36.261958 | orchestrator |
2025-07-06 19:52:36.261969 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2025-07-06 19:52:36.261979 | orchestrator | Sunday 06 July 2025  19:52:23 +0000 (0:00:00.782)       0:06:47.573 ***********
2025-07-06 19:52:36.261990 | orchestrator | ok: [testbed-node-0]
2025-07-06 19:52:36.262001 | orchestrator | ok: [testbed-manager]
2025-07-06 19:52:36.262075 | orchestrator | ok: [testbed-node-1]
2025-07-06 19:52:36.262091 | orchestrator | ok: [testbed-node-2]
2025-07-06 19:52:36.262102 | orchestrator | ok: [testbed-node-3]
2025-07-06 19:52:36.262113 | orchestrator | ok: [testbed-node-4]
2025-07-06 19:52:36.262124 | orchestrator | ok: [testbed-node-5]
2025-07-06 19:52:36.262135 | orchestrator |
2025-07-06 19:52:36.262146 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2025-07-06 19:52:36.262156 | orchestrator | Sunday 06 July 2025  19:52:25 +0000 (0:00:01.142)       0:06:48.715 ***********
2025-07-06 19:52:36.262167 | orchestrator | ok: [testbed-manager]
2025-07-06 19:52:36.262178 | orchestrator | ok: [testbed-node-0]
2025-07-06 19:52:36.262188 | orchestrator | ok: [testbed-node-1]
2025-07-06 19:52:36.262199 | orchestrator | ok: [testbed-node-2]
2025-07-06 19:52:36.262209 | orchestrator | ok: [testbed-node-3]
2025-07-06 19:52:36.262220 | orchestrator | ok: [testbed-node-4]
2025-07-06 19:52:36.262230 | orchestrator | ok: [testbed-node-5]
2025-07-06 19:52:36.262241 | orchestrator |
2025-07-06 19:52:36.262252 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2025-07-06 19:52:36.262262 | orchestrator | Sunday 06 July 2025  19:52:25 +0000 (0:00:00.809)       0:06:49.525 ***********
2025-07-06 19:52:36.262273 | orchestrator | skipping: [testbed-manager]
2025-07-06 19:52:36.262291 | orchestrator | skipping: [testbed-node-0]
2025-07-06 19:52:36.262302 | orchestrator | skipping: [testbed-node-1]
2025-07-06 19:52:36.262313 | orchestrator | skipping: [testbed-node-2]
2025-07-06 19:52:36.262324 | orchestrator | skipping: [testbed-node-3]
2025-07-06 19:52:36.262334 | orchestrator | skipping: [testbed-node-4]
2025-07-06 19:52:36.262345 | orchestrator | skipping: [testbed-node-5]
2025-07-06 19:52:36.262355 | orchestrator |
2025-07-06 19:52:36.262366 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2025-07-06 19:52:36.262377 | orchestrator | Sunday 06 July 2025  19:52:26 +0000 (0:00:00.553)       0:06:50.078 ***********
2025-07-06 19:52:36.262387 | orchestrator | ok: [testbed-manager]
2025-07-06 19:52:36.262398 | orchestrator | ok: [testbed-node-0]
2025-07-06 19:52:36.262409 | orchestrator | ok: [testbed-node-1]
2025-07-06 19:52:36.262419 | orchestrator | ok: [testbed-node-2]
2025-07-06 19:52:36.262439 | orchestrator | ok: [testbed-node-4]
2025-07-06 19:52:36.262450 | orchestrator | ok: [testbed-node-5]
2025-07-06 19:52:36.262461 | orchestrator | ok: [testbed-node-3]
2025-07-06 19:52:36.262471 | orchestrator |
2025-07-06 19:52:36.262482 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2025-07-06 19:52:36.262493 | orchestrator | Sunday 06 July 2025  19:52:27 +0000 (0:00:01.401)       0:06:51.480 ***********
2025-07-06 19:52:36.262503 | orchestrator | skipping: [testbed-manager]
2025-07-06 19:52:36.262514 | orchestrator | skipping: [testbed-node-0]
2025-07-06 19:52:36.262525 | orchestrator | skipping: [testbed-node-1]
2025-07-06 19:52:36.262535 | orchestrator | skipping: [testbed-node-2]
2025-07-06 19:52:36.262546 | orchestrator | skipping: [testbed-node-3]
2025-07-06 19:52:36.262557 | orchestrator | skipping: [testbed-node-4]
2025-07-06 19:52:36.262568 | orchestrator | skipping: [testbed-node-5]
2025-07-06 19:52:36.262578 | orchestrator |
2025-07-06 19:52:36.262589 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2025-07-06 19:52:36.262600 | orchestrator | Sunday 06 July 2025  19:52:28 +0000 (0:00:00.499)       0:06:51.979 ***********
2025-07-06 19:52:36.262611 | orchestrator | ok: [testbed-manager]
2025-07-06 19:52:36.262621 | orchestrator | changed: [testbed-node-0]
2025-07-06 19:52:36.262632 | orchestrator | changed: [testbed-node-4]
2025-07-06 19:52:36.262643 | orchestrator | changed: [testbed-node-1]
2025-07-06 19:52:36.262653 | orchestrator | changed: [testbed-node-2]
2025-07-06 19:52:36.262664 | orchestrator | changed: [testbed-node-5]
2025-07-06 19:52:36.262681 | orchestrator | changed: [testbed-node-3]
2025-07-06 19:52:36.262700 | orchestrator |
2025-07-06 19:52:36.262736 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2025-07-06 19:53:08.170989 | orchestrator | Sunday 06 July 2025  19:52:36 +0000 (0:00:07.951)       0:06:59.930 ***********
2025-07-06 19:53:08.171088 | orchestrator | ok: [testbed-manager]
2025-07-06 19:53:08.171101 | orchestrator | changed: [testbed-node-0]
2025-07-06 19:53:08.171112 | orchestrator | changed: [testbed-node-1]
2025-07-06 19:53:08.171121 | orchestrator | changed: [testbed-node-2]
2025-07-06 19:53:08.171129 | orchestrator | changed: [testbed-node-3]
2025-07-06 19:53:08.171138 | orchestrator | changed: [testbed-node-4]
2025-07-06 19:53:08.171147 | orchestrator | changed: [testbed-node-5]
2025-07-06 19:53:08.171156 | orchestrator |
2025-07-06 19:53:08.171166 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2025-07-06 19:53:08.171175 | orchestrator | Sunday 06 July 2025  19:52:37 +0000 (0:00:01.330)       0:07:01.261 ***********
2025-07-06 19:53:08.171184 | orchestrator | ok: [testbed-manager]
2025-07-06 19:53:08.171193 | orchestrator | changed: [testbed-node-0]
2025-07-06 19:53:08.171202 | orchestrator | changed: [testbed-node-1]
2025-07-06 19:53:08.171211 | orchestrator | changed: [testbed-node-2]
2025-07-06 19:53:08.171219 | orchestrator | changed: [testbed-node-3]
2025-07-06 19:53:08.171228 | orchestrator | changed: [testbed-node-4]
2025-07-06 19:53:08.171236 | orchestrator | changed: [testbed-node-5]
2025-07-06 19:53:08.171245 | orchestrator |
2025-07-06 19:53:08.171254 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2025-07-06 19:53:08.171264 | orchestrator | Sunday 06 July 2025  19:52:39 +0000 (0:00:01.665)       0:07:02.927 ***********
2025-07-06 19:53:08.171273 | orchestrator | ok: [testbed-manager]
2025-07-06 19:53:08.171281 | orchestrator | changed: [testbed-node-0]
2025-07-06 19:53:08.171290 | orchestrator | changed: [testbed-node-1]
2025-07-06 19:53:08.171299 | orchestrator | changed: [testbed-node-2]
2025-07-06 19:53:08.171307 | orchestrator | changed: [testbed-node-3]
2025-07-06 19:53:08.171316 | orchestrator | changed: [testbed-node-4]
2025-07-06 19:53:08.171324 | orchestrator | changed: [testbed-node-5]
2025-07-06 19:53:08.171333 | orchestrator |
2025-07-06 19:53:08.171342 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-07-06 19:53:08.171351 | orchestrator | Sunday 06 July 2025  19:52:40 +0000 (0:00:01.059)       0:07:04.567 ***********
2025-07-06 19:53:08.171360 | orchestrator | ok: [testbed-manager]
2025-07-06 19:53:08.171368 | orchestrator | ok: [testbed-node-0]
2025-07-06 19:53:08.171397 | orchestrator | ok: [testbed-node-1]
2025-07-06 19:53:08.171406 | orchestrator | ok: [testbed-node-2]
2025-07-06 19:53:08.171415 | orchestrator | ok: [testbed-node-3]
2025-07-06 19:53:08.171423 | orchestrator | ok: [testbed-node-4]
2025-07-06 19:53:08.171432 | orchestrator | ok: [testbed-node-5]
2025-07-06 19:53:08.171440 | orchestrator |
2025-07-06 19:53:08.171449 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-07-06 19:53:08.171458 | orchestrator | Sunday 06 July 2025  19:52:41 +0000 (0:00:01.059)       0:07:05.627 ***********
2025-07-06 19:53:08.171466 | orchestrator | skipping: [testbed-manager]
2025-07-06 19:53:08.171475 | orchestrator | skipping: [testbed-node-0]
2025-07-06 19:53:08.171486 | orchestrator | skipping: [testbed-node-1]
2025-07-06 19:53:08.171496 | orchestrator | skipping: [testbed-node-2]
2025-07-06 19:53:08.171506 | orchestrator | skipping: [testbed-node-3]
2025-07-06 19:53:08.171516 | orchestrator | skipping: [testbed-node-4]
2025-07-06 19:53:08.171525 | orchestrator | skipping: [testbed-node-5]
2025-07-06 19:53:08.171535 | orchestrator |
2025-07-06 19:53:08.171546 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2025-07-06 19:53:08.171556 | orchestrator | Sunday 06 July 2025  19:52:42 +0000 (0:00:00.818)       0:07:06.446 ***********
2025-07-06 19:53:08.171566 | orchestrator | skipping: [testbed-manager]
2025-07-06 19:53:08.171576 | orchestrator | skipping: [testbed-node-0]
2025-07-06 19:53:08.171587 | orchestrator | skipping: [testbed-node-1]
2025-07-06 19:53:08.171597 | orchestrator | skipping: [testbed-node-2]
2025-07-06 19:53:08.171607 | orchestrator | skipping: [testbed-node-3]
2025-07-06 19:53:08.171617 | orchestrator | skipping: [testbed-node-4]
2025-07-06 19:53:08.171627 | orchestrator | skipping: [testbed-node-5]
2025-07-06 19:53:08.171637 | orchestrator |
2025-07-06 19:53:08.171647 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2025-07-06 19:53:08.171672 | orchestrator | Sunday 06 July 2025  19:52:43 +0000 (0:00:00.496)       0:07:06.942 ***********
2025-07-06 19:53:08.171682 | orchestrator | ok: [testbed-manager]
2025-07-06 19:53:08.171693 | orchestrator | ok: [testbed-node-0]
2025-07-06 19:53:08.171703 | orchestrator | ok: [testbed-node-1]
2025-07-06 19:53:08.171713 | orchestrator | ok: [testbed-node-2]
2025-07-06 19:53:08.171722 | orchestrator | ok: [testbed-node-3]
2025-07-06 19:53:08.171732 | orchestrator | ok: [testbed-node-4]
2025-07-06 19:53:08.171741 | orchestrator | ok: [testbed-node-5]
2025-07-06 19:53:08.171752 | orchestrator |
2025-07-06 19:53:08.171762 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2025-07-06 19:53:08.171773 | orchestrator | Sunday 06 July 2025  19:52:43 +0000 (0:00:00.705)       0:07:07.648 ***********
2025-07-06 19:53:08.171783 | orchestrator | ok: [testbed-manager]
2025-07-06 19:53:08.171793 | orchestrator | ok: [testbed-node-0]
2025-07-06 19:53:08.171802 | orchestrator | ok: [testbed-node-1]
2025-07-06 19:53:08.171812 | orchestrator | ok: [testbed-node-2]
2025-07-06 19:53:08.171822 | orchestrator | ok: [testbed-node-3]
2025-07-06 19:53:08.171832 | orchestrator | ok: [testbed-node-4]
2025-07-06 19:53:08.171843 | orchestrator | ok: [testbed-node-5]
2025-07-06 19:53:08.171851 | orchestrator |
2025-07-06 19:53:08.171860 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2025-07-06 19:53:08.171869 | orchestrator | Sunday 06 July 2025  19:52:44 +0000 (0:00:00.498)       0:07:08.146 ***********
2025-07-06 19:53:08.171877 | orchestrator | ok: [testbed-manager]
2025-07-06 19:53:08.171885 | orchestrator | ok: [testbed-node-0]
2025-07-06 19:53:08.171894 | orchestrator | ok: [testbed-node-1]
2025-07-06 19:53:08.171902 | orchestrator | ok: [testbed-node-2]
2025-07-06 19:53:08.171911 | orchestrator | ok: [testbed-node-3]
2025-07-06 19:53:08.171919 | orchestrator | ok: [testbed-node-4]
2025-07-06 19:53:08.171945 | orchestrator | ok: [testbed-node-5]
2025-07-06 19:53:08.171954 | orchestrator |
2025-07-06 19:53:08.171963 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2025-07-06 19:53:08.171972 | orchestrator | Sunday 06 July 2025  19:52:44 +0000 (0:00:00.521)       0:07:08.667 ***********
2025-07-06 19:53:08.171980 | orchestrator | ok: [testbed-manager]
2025-07-06 19:53:08.171997 | orchestrator | ok: [testbed-node-0]
2025-07-06 19:53:08.172005 | orchestrator | ok: [testbed-node-1]
2025-07-06 19:53:08.172014 | orchestrator | ok: [testbed-node-2]
2025-07-06 19:53:08.172022 | orchestrator | ok: [testbed-node-4]
2025-07-06 19:53:08.172031 | orchestrator | ok: [testbed-node-5]
2025-07-06 19:53:08.172039 | orchestrator | ok: [testbed-node-3]
2025-07-06 19:53:08.172048 | orchestrator |
2025-07-06 19:53:08.172057 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2025-07-06 19:53:08.172080 | orchestrator | Sunday 06 July 2025  19:52:50 +0000 (0:00:05.644)       0:07:14.312 ***********
2025-07-06 19:53:08.172090 | orchestrator | skipping: [testbed-manager]
2025-07-06 19:53:08.172098 | orchestrator | skipping: [testbed-node-0]
2025-07-06 19:53:08.172107 | orchestrator | skipping: [testbed-node-1]
2025-07-06 19:53:08.172116 | orchestrator | skipping: [testbed-node-2]
2025-07-06 19:53:08.172124 | orchestrator | skipping: [testbed-node-3]
2025-07-06 19:53:08.172133 | orchestrator | skipping: [testbed-node-4]
2025-07-06 19:53:08.172142 | orchestrator | skipping: [testbed-node-5]
2025-07-06 19:53:08.172150 | orchestrator |
2025-07-06 19:53:08.172159 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2025-07-06 19:53:08.172168 | orchestrator | Sunday 06 July 2025  19:52:51 +0000 (0:00:00.464)       0:07:14.777 ***********
2025-07-06 19:53:08.172179 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-06 19:53:08.172190 | orchestrator |
2025-07-06 19:53:08.172199 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2025-07-06 19:53:08.172208 | orchestrator | Sunday 06 July 2025  19:52:52 +0000 (0:00:00.941)       0:07:15.718 ***********
2025-07-06 19:53:08.172217 | orchestrator | ok: [testbed-manager]
2025-07-06 19:53:08.172225 | orchestrator | ok: [testbed-node-0]
2025-07-06 19:53:08.172234 | orchestrator | ok: [testbed-node-1]
2025-07-06 19:53:08.172243 | orchestrator | ok: [testbed-node-4]
2025-07-06 19:53:08.172251 | orchestrator | ok: [testbed-node-2]
2025-07-06 19:53:08.172260 | orchestrator | ok: [testbed-node-5]
2025-07-06 19:53:08.172268 | orchestrator | ok: [testbed-node-3]
2025-07-06 19:53:08.172277 | orchestrator |
2025-07-06 19:53:08.172286 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2025-07-06 19:53:08.172295 | orchestrator | Sunday 06 July 2025  19:52:54 +0000 (0:00:02.043)       0:07:17.762 ***********
2025-07-06 19:53:08.172303 | orchestrator | ok: [testbed-manager]
2025-07-06 19:53:08.172312 | orchestrator | ok: [testbed-node-0]
2025-07-06 19:53:08.172320 | orchestrator | ok: [testbed-node-1]
2025-07-06 19:53:08.172329 | orchestrator | ok: [testbed-node-2]
2025-07-06 19:53:08.172337 | orchestrator | ok: [testbed-node-3]
2025-07-06 19:53:08.172346 | orchestrator | ok: [testbed-node-4]
2025-07-06 19:53:08.172355 | orchestrator | ok: [testbed-node-5]
2025-07-06 19:53:08.172363 | orchestrator |
2025-07-06 19:53:08.172372 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2025-07-06 19:53:08.172381 | orchestrator | Sunday 06 July 2025  19:52:55 +0000 (0:00:01.182)       0:07:18.944 ***********
2025-07-06 19:53:08.172389 | orchestrator | ok: [testbed-manager]
2025-07-06 19:53:08.172398 | orchestrator | ok: [testbed-node-0]
2025-07-06 19:53:08.172406 | orchestrator | ok: [testbed-node-1]
2025-07-06 19:53:08.172415 | orchestrator | ok: [testbed-node-2]
2025-07-06 19:53:08.172423 | orchestrator | ok: [testbed-node-3]
2025-07-06 19:53:08.172432 | orchestrator | ok: [testbed-node-4]
2025-07-06 19:53:08.172440 | orchestrator | ok: [testbed-node-5]
2025-07-06 19:53:08.172449 | orchestrator |
2025-07-06 19:53:08.172458 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2025-07-06 19:53:08.172466 | orchestrator | Sunday 06 July 2025  19:52:56 +0000 (0:00:01.120)       0:07:20.065 ***********
2025-07-06 19:53:08.172476 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-06 19:53:08.172493 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-06 19:53:08.172502 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-06 19:53:08.172515 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-06 19:53:08.172524 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-06 19:53:08.172533 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-06 19:53:08.172542 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-06 19:53:08.172550 | orchestrator |
2025-07-06 19:53:08.172559 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2025-07-06 19:53:08.172568 | orchestrator | Sunday 06 July 2025  19:52:58 +0000 (0:00:01.711)       0:07:21.777 ***********
2025-07-06 19:53:08.172577 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-06 19:53:08.172586 | orchestrator |
2025-07-06 19:53:08.172595 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2025-07-06 19:53:08.172604 | orchestrator | Sunday 06 July 2025  19:52:58 +0000 (0:00:00.766)       0:07:22.543 ***********
2025-07-06 19:53:08.172613 | orchestrator | changed: [testbed-manager]
2025-07-06 19:53:08.172622 | orchestrator | changed: [testbed-node-4]
2025-07-06 19:53:08.172630 | orchestrator | changed: [testbed-node-1]
2025-07-06 19:53:08.172639 | orchestrator | changed: [testbed-node-0]
2025-07-06 19:53:08.172648 | orchestrator | changed: [testbed-node-2]
2025-07-06 19:53:08.172656 | orchestrator | changed: [testbed-node-5]
2025-07-06 19:53:08.172665 | orchestrator | changed: [testbed-node-3]
2025-07-06 19:53:08.172674 | orchestrator |
2025-07-06 19:53:08.172683 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2025-07-06 19:53:08.172696 | orchestrator | Sunday 06 July 2025  19:53:08 +0000 (0:00:09.297)       0:07:31.840 ***********
2025-07-06 19:53:24.505467 | orchestrator | ok: [testbed-manager]
2025-07-06 19:53:24.505601 | orchestrator | ok: [testbed-node-0]
2025-07-06 19:53:24.505629 | orchestrator | ok: [testbed-node-1]
2025-07-06 19:53:24.505647 | orchestrator | ok: [testbed-node-2]
2025-07-06 19:53:24.505664 | orchestrator | ok: [testbed-node-4]
2025-07-06 19:53:24.505682 | orchestrator | ok: [testbed-node-3]
2025-07-06 19:53:24.505699 | orchestrator | ok: [testbed-node-5]
2025-07-06 19:53:24.505717 | orchestrator |
2025-07-06 19:53:24.505738 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2025-07-06 19:53:24.505758 | orchestrator | Sunday 06 July 2025  19:53:09 +0000 (0:00:01.650)       0:07:33.491 ***********
2025-07-06 19:53:24.505779 | orchestrator | ok: [testbed-node-0]
2025-07-06 19:53:24.505797 | orchestrator | ok: [testbed-node-1]
2025-07-06 19:53:24.505815 | orchestrator | ok: [testbed-node-2]
2025-07-06 19:53:24.505834 | orchestrator | ok: [testbed-node-3]
2025-07-06 19:53:24.505852 | orchestrator | ok: [testbed-node-4]
2025-07-06 19:53:24.505871 | orchestrator | ok: [testbed-node-5]
2025-07-06 19:53:24.505889 | orchestrator |
2025-07-06 19:53:24.505906 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2025-07-06 19:53:24.505925 | orchestrator | Sunday 06 July 2025  19:53:11 +0000 (0:00:01.274)       0:07:34.765 ***********
2025-07-06 19:53:24.506098 | orchestrator | changed: [testbed-manager]
2025-07-06 19:53:24.506131 | orchestrator | changed: [testbed-node-0]
2025-07-06 19:53:24.506151 | orchestrator | changed: [testbed-node-1]
2025-07-06 19:53:24.506164 | orchestrator | changed: [testbed-node-2]
2025-07-06 19:53:24.506206 | orchestrator | changed: [testbed-node-3]
2025-07-06 19:53:24.506220 | orchestrator | changed: [testbed-node-4]
2025-07-06 19:53:24.506232 | orchestrator | changed: [testbed-node-5]
2025-07-06 19:53:24.506244 | orchestrator |
2025-07-06 19:53:24.506257 | orchestrator | PLAY [Apply bootstrap role part 2]
********************************************* 2025-07-06 19:53:24.506270 | orchestrator | 2025-07-06 19:53:24.506283 | orchestrator | TASK [Include hardening role] ************************************************** 2025-07-06 19:53:24.506295 | orchestrator | Sunday 06 July 2025 19:53:12 +0000 (0:00:01.465) 0:07:36.230 *********** 2025-07-06 19:53:24.506308 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:53:24.506321 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:53:24.506333 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:53:24.506346 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:53:24.506357 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:53:24.506368 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:53:24.506379 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:53:24.506389 | orchestrator | 2025-07-06 19:53:24.506400 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-07-06 19:53:24.506411 | orchestrator | 2025-07-06 19:53:24.506422 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-07-06 19:53:24.506433 | orchestrator | Sunday 06 July 2025 19:53:13 +0000 (0:00:00.514) 0:07:36.745 *********** 2025-07-06 19:53:24.506443 | orchestrator | changed: [testbed-manager] 2025-07-06 19:53:24.506454 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:53:24.506465 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:53:24.506475 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:53:24.506486 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:53:24.506497 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:53:24.506508 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:53:24.506518 | orchestrator | 2025-07-06 19:53:24.506529 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-07-06 19:53:24.506540 | orchestrator | Sunday 06 
July 2025 19:53:14 +0000 (0:00:01.299) 0:07:38.045 *********** 2025-07-06 19:53:24.506551 | orchestrator | ok: [testbed-manager] 2025-07-06 19:53:24.506562 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:53:24.506572 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:53:24.506583 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:53:24.506594 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:53:24.506604 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:53:24.506615 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:53:24.506625 | orchestrator | 2025-07-06 19:53:24.506638 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-07-06 19:53:24.506658 | orchestrator | Sunday 06 July 2025 19:53:16 +0000 (0:00:02.120) 0:07:40.165 *********** 2025-07-06 19:53:24.506678 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:53:24.506696 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:53:24.506714 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:53:24.506730 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:53:24.506748 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:53:24.506766 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:53:24.506786 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:53:24.506804 | orchestrator | 2025-07-06 19:53:24.506824 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-07-06 19:53:24.506836 | orchestrator | Sunday 06 July 2025 19:53:17 +0000 (0:00:00.973) 0:07:41.139 *********** 2025-07-06 19:53:24.506847 | orchestrator | changed: [testbed-manager] 2025-07-06 19:53:24.506858 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:53:24.506869 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:53:24.506879 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:53:24.506890 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:53:24.506900 | orchestrator | changed: 
[testbed-node-4] 2025-07-06 19:53:24.506911 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:53:24.506922 | orchestrator | 2025-07-06 19:53:24.506932 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-07-06 19:53:24.506996 | orchestrator | 2025-07-06 19:53:24.507008 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-07-06 19:53:24.507066 | orchestrator | Sunday 06 July 2025 19:53:18 +0000 (0:00:01.192) 0:07:42.332 *********** 2025-07-06 19:53:24.507079 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 19:53:24.507092 | orchestrator | 2025-07-06 19:53:24.507103 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-07-06 19:53:24.507114 | orchestrator | Sunday 06 July 2025 19:53:19 +0000 (0:00:00.922) 0:07:43.254 *********** 2025-07-06 19:53:24.507125 | orchestrator | ok: [testbed-manager] 2025-07-06 19:53:24.507136 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:53:24.507147 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:53:24.507158 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:53:24.507168 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:53:24.507179 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:53:24.507190 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:53:24.507201 | orchestrator | 2025-07-06 19:53:24.507236 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-07-06 19:53:24.507248 | orchestrator | Sunday 06 July 2025 19:53:20 +0000 (0:00:00.804) 0:07:44.059 *********** 2025-07-06 19:53:24.507259 | orchestrator | changed: [testbed-manager] 2025-07-06 19:53:24.507270 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:53:24.507280 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:53:24.507291 | 
orchestrator | changed: [testbed-node-2] 2025-07-06 19:53:24.507302 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:53:24.507313 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:53:24.507324 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:53:24.507335 | orchestrator | 2025-07-06 19:53:24.507346 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-07-06 19:53:24.507357 | orchestrator | Sunday 06 July 2025 19:53:21 +0000 (0:00:01.116) 0:07:45.176 *********** 2025-07-06 19:53:24.507371 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 19:53:24.507394 | orchestrator | 2025-07-06 19:53:24.507421 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-07-06 19:53:24.507439 | orchestrator | Sunday 06 July 2025 19:53:22 +0000 (0:00:01.004) 0:07:46.181 *********** 2025-07-06 19:53:24.507456 | orchestrator | ok: [testbed-manager] 2025-07-06 19:53:24.507474 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:53:24.507490 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:53:24.507508 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:53:24.507525 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:53:24.507543 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:53:24.507561 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:53:24.507579 | orchestrator | 2025-07-06 19:53:24.507598 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-07-06 19:53:24.507616 | orchestrator | Sunday 06 July 2025 19:53:23 +0000 (0:00:00.881) 0:07:47.062 *********** 2025-07-06 19:53:24.507635 | orchestrator | changed: [testbed-manager] 2025-07-06 19:53:24.507651 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:53:24.507662 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:53:24.507673 | 
orchestrator | changed: [testbed-node-2] 2025-07-06 19:53:24.507684 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:53:24.507694 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:53:24.507705 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:53:24.507715 | orchestrator | 2025-07-06 19:53:24.507726 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 19:53:24.507738 | orchestrator | testbed-manager : ok=162  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-07-06 19:53:24.507750 | orchestrator | testbed-node-0 : ok=170  changed=66  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-07-06 19:53:24.507774 | orchestrator | testbed-node-1 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-07-06 19:53:24.507785 | orchestrator | testbed-node-2 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-07-06 19:53:24.507795 | orchestrator | testbed-node-3 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-07-06 19:53:24.507813 | orchestrator | testbed-node-4 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-07-06 19:53:24.507824 | orchestrator | testbed-node-5 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-07-06 19:53:24.507835 | orchestrator | 2025-07-06 19:53:24.507845 | orchestrator | 2025-07-06 19:53:24.507856 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 19:53:24.507867 | orchestrator | Sunday 06 July 2025 19:53:24 +0000 (0:00:01.100) 0:07:48.163 *********** 2025-07-06 19:53:24.507878 | orchestrator | =============================================================================== 2025-07-06 19:53:24.507888 | orchestrator | osism.commons.packages : Install required packages --------------------- 77.03s 2025-07-06 19:53:24.507899 | orchestrator | 
osism.commons.packages : Download required packages -------------------- 38.11s 2025-07-06 19:53:24.507910 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.01s 2025-07-06 19:53:24.507920 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.93s 2025-07-06 19:53:24.507931 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.53s 2025-07-06 19:53:24.507969 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.24s 2025-07-06 19:53:24.507980 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.22s 2025-07-06 19:53:24.507990 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.09s 2025-07-06 19:53:24.508001 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.30s 2025-07-06 19:53:24.508012 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.88s 2025-07-06 19:53:24.508023 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.60s 2025-07-06 19:53:24.508034 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.27s 2025-07-06 19:53:24.508044 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.07s 2025-07-06 19:53:24.508055 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.95s 2025-07-06 19:53:24.508077 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.91s 2025-07-06 19:53:24.983765 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.38s 2025-07-06 19:53:24.983867 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.40s 2025-07-06 19:53:24.983881 | orchestrator | 
osism.commons.cleanup : Remove dependencies that are no longer required --- 5.72s 2025-07-06 19:53:24.983893 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.70s 2025-07-06 19:53:24.983904 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.64s 2025-07-06 19:53:25.268154 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-07-06 19:53:25.268253 | orchestrator | + osism apply network 2025-07-06 19:53:37.839888 | orchestrator | 2025-07-06 19:53:37 | INFO  | Task 4091c5e1-4d8c-4478-a8a1-6e97704c6bda (network) was prepared for execution. 2025-07-06 19:53:37.840080 | orchestrator | 2025-07-06 19:53:37 | INFO  | It takes a moment until task 4091c5e1-4d8c-4478-a8a1-6e97704c6bda (network) has been started and output is visible here. 2025-07-06 19:54:05.842233 | orchestrator | 2025-07-06 19:54:05.842311 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-07-06 19:54:05.842319 | orchestrator | 2025-07-06 19:54:05.842323 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-07-06 19:54:05.842328 | orchestrator | Sunday 06 July 2025 19:53:42 +0000 (0:00:00.268) 0:00:00.268 *********** 2025-07-06 19:54:05.842332 | orchestrator | ok: [testbed-manager] 2025-07-06 19:54:05.842337 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:54:05.842341 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:54:05.842345 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:54:05.842348 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:54:05.842352 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:54:05.842356 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:54:05.842360 | orchestrator | 2025-07-06 19:54:05.842364 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-07-06 19:54:05.842368 | orchestrator | Sunday 06 July 2025 19:53:42 +0000 (0:00:00.678) 
0:00:00.946 *********** 2025-07-06 19:54:05.842374 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 19:54:05.842380 | orchestrator | 2025-07-06 19:54:05.842384 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-07-06 19:54:05.842387 | orchestrator | Sunday 06 July 2025 19:53:43 +0000 (0:00:01.240) 0:00:02.187 *********** 2025-07-06 19:54:05.842392 | orchestrator | ok: [testbed-manager] 2025-07-06 19:54:05.842396 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:54:05.842400 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:54:05.842403 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:54:05.842407 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:54:05.842411 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:54:05.842415 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:54:05.842418 | orchestrator | 2025-07-06 19:54:05.842422 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-07-06 19:54:05.842426 | orchestrator | Sunday 06 July 2025 19:53:45 +0000 (0:00:01.942) 0:00:04.130 *********** 2025-07-06 19:54:05.842430 | orchestrator | ok: [testbed-manager] 2025-07-06 19:54:05.842433 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:54:05.842437 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:54:05.842441 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:54:05.842445 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:54:05.842449 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:54:05.842452 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:54:05.842456 | orchestrator | 2025-07-06 19:54:05.842460 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-07-06 19:54:05.842464 | orchestrator 
| Sunday 06 July 2025 19:53:47 +0000 (0:00:01.688) 0:00:05.818 *********** 2025-07-06 19:54:05.842468 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-07-06 19:54:05.842473 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-07-06 19:54:05.842476 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-07-06 19:54:05.842480 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2025-07-06 19:54:05.842484 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-07-06 19:54:05.842488 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-07-06 19:54:05.842492 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-07-06 19:54:05.842496 | orchestrator | 2025-07-06 19:54:05.842499 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-07-06 19:54:05.842503 | orchestrator | Sunday 06 July 2025 19:53:48 +0000 (0:00:00.973) 0:00:06.791 *********** 2025-07-06 19:54:05.842507 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-07-06 19:54:05.842512 | orchestrator | ok: [testbed-manager -> localhost] 2025-07-06 19:54:05.842516 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-07-06 19:54:05.842534 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-07-06 19:54:05.842538 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-06 19:54:05.842542 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-07-06 19:54:05.842545 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-07-06 19:54:05.842549 | orchestrator | 2025-07-06 19:54:05.842553 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-07-06 19:54:05.842557 | orchestrator | Sunday 06 July 2025 19:53:51 +0000 (0:00:03.197) 0:00:09.989 *********** 2025-07-06 19:54:05.842561 | orchestrator | changed: [testbed-manager] 2025-07-06 19:54:05.842565 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:54:05.842568 | orchestrator | 
changed: [testbed-node-1] 2025-07-06 19:54:05.842572 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:54:05.842576 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:54:05.842580 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:54:05.842583 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:54:05.842587 | orchestrator | 2025-07-06 19:54:05.842591 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-07-06 19:54:05.842595 | orchestrator | Sunday 06 July 2025 19:53:53 +0000 (0:00:01.464) 0:00:11.453 *********** 2025-07-06 19:54:05.842598 | orchestrator | ok: [testbed-manager -> localhost] 2025-07-06 19:54:05.842602 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-07-06 19:54:05.842606 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-06 19:54:05.842610 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-07-06 19:54:05.842614 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-07-06 19:54:05.842618 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-07-06 19:54:05.842621 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-07-06 19:54:05.842625 | orchestrator | 2025-07-06 19:54:05.842629 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-07-06 19:54:05.842633 | orchestrator | Sunday 06 July 2025 19:53:55 +0000 (0:00:01.952) 0:00:13.406 *********** 2025-07-06 19:54:05.842637 | orchestrator | ok: [testbed-manager] 2025-07-06 19:54:05.842640 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:54:05.842644 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:54:05.842648 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:54:05.842652 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:54:05.842655 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:54:05.842659 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:54:05.842663 | orchestrator | 2025-07-06 19:54:05.842667 | orchestrator | TASK [osism.commons.network : 
Copy interfaces file] **************************** 2025-07-06 19:54:05.842680 | orchestrator | Sunday 06 July 2025 19:53:56 +0000 (0:00:01.203) 0:00:14.609 *********** 2025-07-06 19:54:05.842684 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:54:05.842688 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:54:05.842692 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:54:05.842696 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:54:05.842699 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:54:05.842703 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:54:05.842707 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:54:05.842711 | orchestrator | 2025-07-06 19:54:05.842714 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-07-06 19:54:05.842718 | orchestrator | Sunday 06 July 2025 19:53:57 +0000 (0:00:00.667) 0:00:15.277 *********** 2025-07-06 19:54:05.842722 | orchestrator | ok: [testbed-manager] 2025-07-06 19:54:05.842726 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:54:05.842729 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:54:05.842733 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:54:05.842737 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:54:05.842740 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:54:05.842744 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:54:05.842748 | orchestrator | 2025-07-06 19:54:05.842752 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-07-06 19:54:05.842755 | orchestrator | Sunday 06 July 2025 19:53:59 +0000 (0:00:02.127) 0:00:17.404 *********** 2025-07-06 19:54:05.842763 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:54:05.842766 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:54:05.842770 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:54:05.842774 | orchestrator | skipping: [testbed-node-3] 2025-07-06 
19:54:05.842778 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:54:05.842781 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:54:05.842786 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-07-06 19:54:05.842791 | orchestrator | 2025-07-06 19:54:05.842795 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-07-06 19:54:05.842800 | orchestrator | Sunday 06 July 2025 19:54:00 +0000 (0:00:00.882) 0:00:18.287 *********** 2025-07-06 19:54:05.842804 | orchestrator | ok: [testbed-manager] 2025-07-06 19:54:05.842809 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:54:05.842813 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:54:05.842818 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:54:05.842822 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:54:05.842827 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:54:05.842831 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:54:05.842835 | orchestrator | 2025-07-06 19:54:05.842850 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-07-06 19:54:05.842855 | orchestrator | Sunday 06 July 2025 19:54:01 +0000 (0:00:01.603) 0:00:19.890 *********** 2025-07-06 19:54:05.842860 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 19:54:05.842866 | orchestrator | 2025-07-06 19:54:05.842870 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-07-06 19:54:05.842874 | orchestrator | Sunday 06 July 2025 19:54:02 +0000 (0:00:01.254) 0:00:21.145 *********** 2025-07-06 19:54:05.842879 | orchestrator | ok: [testbed-manager] 2025-07-06 
19:54:05.842883 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:54:05.842888 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:54:05.842892 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:54:05.842896 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:54:05.842901 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:54:05.842905 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:54:05.842909 | orchestrator | 2025-07-06 19:54:05.842914 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-07-06 19:54:05.842918 | orchestrator | Sunday 06 July 2025 19:54:03 +0000 (0:00:00.930) 0:00:22.076 *********** 2025-07-06 19:54:05.842922 | orchestrator | ok: [testbed-manager] 2025-07-06 19:54:05.842927 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:54:05.842931 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:54:05.842935 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:54:05.842940 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:54:05.842944 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:54:05.842948 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:54:05.842952 | orchestrator | 2025-07-06 19:54:05.842982 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-07-06 19:54:05.842987 | orchestrator | Sunday 06 July 2025 19:54:04 +0000 (0:00:00.780) 0:00:22.856 *********** 2025-07-06 19:54:05.842991 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-07-06 19:54:05.842996 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-07-06 19:54:05.843000 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-07-06 19:54:05.843004 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-07-06 19:54:05.843009 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-07-06 19:54:05.843013 | 
orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-07-06 19:54:05.843017 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-07-06 19:54:05.843025 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-07-06 19:54:05.843029 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-07-06 19:54:05.843034 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-07-06 19:54:05.843038 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-07-06 19:54:05.843043 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-07-06 19:54:05.843047 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-07-06 19:54:05.843052 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-07-06 19:54:05.843056 | orchestrator | 2025-07-06 19:54:05.843063 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-07-06 19:54:22.280681 | orchestrator | Sunday 06 July 2025 19:54:05 +0000 (0:00:01.154) 0:00:24.010 *********** 2025-07-06 19:54:22.280800 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:54:22.280817 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:54:22.280829 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:54:22.280840 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:54:22.280851 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:54:22.280861 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:54:22.280872 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:54:22.280883 | orchestrator | 2025-07-06 19:54:22.280895 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-07-06 19:54:22.280906 | orchestrator | Sunday 06 July 2025 19:54:06 +0000 
(0:00:00.664) 0:00:24.675 *********** 2025-07-06 19:54:22.280919 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-0, testbed-manager, testbed-node-4, testbed-node-3, testbed-node-1, testbed-node-2, testbed-node-5 2025-07-06 19:54:22.280933 | orchestrator | 2025-07-06 19:54:22.280944 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-07-06 19:54:22.280955 | orchestrator | Sunday 06 July 2025 19:54:11 +0000 (0:00:04.710) 0:00:29.385 *********** 2025-07-06 19:54:22.281019 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-07-06 19:54:22.281035 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-07-06 19:54:22.281063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-07-06 19:54:22.281076 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-07-06 19:54:22.281087 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', 
'192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-07-06 19:54:22.281098 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-07-06 19:54:22.281109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-07-06 19:54:22.281142 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-07-06 19:54:22.281154 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-07-06 19:54:22.281172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-07-06 19:54:22.281184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-07-06 19:54:22.281212 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-07-06 19:54:22.281227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-07-06 19:54:22.281240 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-07-06 19:54:22.281252 | orchestrator | 2025-07-06 19:54:22.281265 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-07-06 19:54:22.281277 | orchestrator | Sunday 06 July 2025 19:54:16 +0000 (0:00:05.660) 0:00:35.046 *********** 2025-07-06 19:54:22.281290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-07-06 19:54:22.281303 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-07-06 19:54:22.281316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 
'mtu': 1350, 'vni': 42}}) 2025-07-06 19:54:22.281333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-07-06 19:54:22.281347 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-07-06 19:54:22.281360 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-07-06 19:54:22.281380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-07-06 19:54:22.281393 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-07-06 19:54:22.281405 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-07-06 19:54:22.281418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', 
'192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-07-06 19:54:22.281431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-07-06 19:54:22.281443 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-07-06 19:54:22.281463 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-07-06 19:54:28.253330 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-07-06 19:54:28.253436 | orchestrator | 2025-07-06 19:54:28.253452 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-07-06 19:54:28.253466 | orchestrator | Sunday 06 July 2025 19:54:22 +0000 (0:00:05.402) 0:00:40.448 *********** 2025-07-06 19:54:28.253479 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 19:54:28.253491 | orchestrator | 2025-07-06 19:54:28.253502 | 
orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-07-06 19:54:28.253513 | orchestrator | Sunday 06 July 2025 19:54:23 +0000 (0:00:01.129) 0:00:41.578 *********** 2025-07-06 19:54:28.253525 | orchestrator | ok: [testbed-manager] 2025-07-06 19:54:28.253537 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:54:28.253548 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:54:28.253558 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:54:28.253569 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:54:28.253579 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:54:28.253590 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:54:28.253601 | orchestrator | 2025-07-06 19:54:28.253612 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-07-06 19:54:28.253622 | orchestrator | Sunday 06 July 2025 19:54:24 +0000 (0:00:01.133) 0:00:42.711 *********** 2025-07-06 19:54:28.253656 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-06 19:54:28.253668 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-06 19:54:28.253679 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-06 19:54:28.253690 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-06 19:54:28.253700 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:54:28.253729 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-06 19:54:28.253741 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-06 19:54:28.253752 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-06 19:54:28.253763 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  
2025-07-06 19:54:28.253773 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:54:28.253784 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-06 19:54:28.253795 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-06 19:54:28.253805 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-06 19:54:28.253816 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-06 19:54:28.253826 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:54:28.253837 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-06 19:54:28.253848 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-06 19:54:28.253858 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-06 19:54:28.253871 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-06 19:54:28.253883 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:54:28.253896 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-06 19:54:28.253907 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-06 19:54:28.253948 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-06 19:54:28.253961 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-06 19:54:28.254072 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:54:28.254085 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-06 19:54:28.254098 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-06 19:54:28.254111 | 
orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-06 19:54:28.254123 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-06 19:54:28.254134 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:54:28.254145 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-06 19:54:28.254155 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-06 19:54:28.254166 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-06 19:54:28.254176 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-06 19:54:28.254187 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:54:28.254198 | orchestrator | 2025-07-06 19:54:28.254208 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-07-06 19:54:28.254236 | orchestrator | Sunday 06 July 2025 19:54:26 +0000 (0:00:02.049) 0:00:44.761 *********** 2025-07-06 19:54:28.254248 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:54:28.254269 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:54:28.254280 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:54:28.254291 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:54:28.254301 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:54:28.254312 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:54:28.254323 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:54:28.254333 | orchestrator | 2025-07-06 19:54:28.254344 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-07-06 19:54:28.254355 | orchestrator | Sunday 06 July 2025 19:54:27 +0000 (0:00:00.635) 0:00:45.397 *********** 2025-07-06 19:54:28.254365 | orchestrator | skipping: [testbed-manager] 2025-07-06 
19:54:28.254376 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:54:28.254386 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:54:28.254397 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:54:28.254407 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:54:28.254417 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:54:28.254428 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:54:28.254438 | orchestrator | 2025-07-06 19:54:28.254449 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 19:54:28.254461 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-06 19:54:28.254473 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-06 19:54:28.254484 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-06 19:54:28.254495 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-06 19:54:28.254512 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-06 19:54:28.254523 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-06 19:54:28.254533 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-06 19:54:28.254544 | orchestrator | 2025-07-06 19:54:28.254555 | orchestrator | 2025-07-06 19:54:28.254566 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 19:54:28.254576 | orchestrator | Sunday 06 July 2025 19:54:27 +0000 (0:00:00.684) 0:00:46.082 *********** 2025-07-06 19:54:28.254587 | orchestrator | =============================================================================== 2025-07-06 19:54:28.254598 | 
orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.66s 2025-07-06 19:54:28.254608 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.40s 2025-07-06 19:54:28.254619 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.71s 2025-07-06 19:54:28.254630 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.20s 2025-07-06 19:54:28.254640 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.13s 2025-07-06 19:54:28.254651 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.05s 2025-07-06 19:54:28.254661 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.95s 2025-07-06 19:54:28.254672 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.94s 2025-07-06 19:54:28.254683 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.69s 2025-07-06 19:54:28.254693 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.60s 2025-07-06 19:54:28.254710 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.46s 2025-07-06 19:54:28.254721 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.25s 2025-07-06 19:54:28.254732 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.24s 2025-07-06 19:54:28.254742 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.20s 2025-07-06 19:54:28.254753 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.15s 2025-07-06 19:54:28.254763 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.13s 2025-07-06 19:54:28.254774 | orchestrator | 
osism.commons.network : Include networkd cleanup tasks ------------------ 1.13s 2025-07-06 19:54:28.254784 | orchestrator | osism.commons.network : Create required directories --------------------- 0.97s 2025-07-06 19:54:28.254795 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.93s 2025-07-06 19:54:28.254806 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.88s 2025-07-06 19:54:28.516534 | orchestrator | + osism apply wireguard 2025-07-06 19:54:40.353300 | orchestrator | 2025-07-06 19:54:40 | INFO  | Task 22e34a2c-d6a6-40b1-855b-f25e95911cc5 (wireguard) was prepared for execution. 2025-07-06 19:54:40.353407 | orchestrator | 2025-07-06 19:54:40 | INFO  | It takes a moment until task 22e34a2c-d6a6-40b1-855b-f25e95911cc5 (wireguard) has been started and output is visible here. 2025-07-06 19:54:58.616708 | orchestrator | 2025-07-06 19:54:58.616822 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-07-06 19:54:58.616838 | orchestrator | 2025-07-06 19:54:58.616851 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-07-06 19:54:58.616863 | orchestrator | Sunday 06 July 2025 19:54:44 +0000 (0:00:00.169) 0:00:00.169 *********** 2025-07-06 19:54:58.616874 | orchestrator | ok: [testbed-manager] 2025-07-06 19:54:58.616886 | orchestrator | 2025-07-06 19:54:58.616898 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-07-06 19:54:58.616909 | orchestrator | Sunday 06 July 2025 19:54:45 +0000 (0:00:01.163) 0:00:01.333 *********** 2025-07-06 19:54:58.616920 | orchestrator | changed: [testbed-manager] 2025-07-06 19:54:58.616932 | orchestrator | 2025-07-06 19:54:58.616943 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-07-06 19:54:58.616954 | orchestrator | Sunday 06 July 2025 19:54:51 +0000 
(0:00:05.776) 0:00:07.109 *********** 2025-07-06 19:54:58.616965 | orchestrator | changed: [testbed-manager] 2025-07-06 19:54:58.616976 | orchestrator | 2025-07-06 19:54:58.617014 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-07-06 19:54:58.617026 | orchestrator | Sunday 06 July 2025 19:54:51 +0000 (0:00:00.553) 0:00:07.663 *********** 2025-07-06 19:54:58.617036 | orchestrator | changed: [testbed-manager] 2025-07-06 19:54:58.617048 | orchestrator | 2025-07-06 19:54:58.617058 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-07-06 19:54:58.617069 | orchestrator | Sunday 06 July 2025 19:54:52 +0000 (0:00:00.403) 0:00:08.067 *********** 2025-07-06 19:54:58.617080 | orchestrator | ok: [testbed-manager] 2025-07-06 19:54:58.617091 | orchestrator | 2025-07-06 19:54:58.617102 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-07-06 19:54:58.617113 | orchestrator | Sunday 06 July 2025 19:54:52 +0000 (0:00:00.505) 0:00:08.572 *********** 2025-07-06 19:54:58.617124 | orchestrator | ok: [testbed-manager] 2025-07-06 19:54:58.617135 | orchestrator | 2025-07-06 19:54:58.617146 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-07-06 19:54:58.617157 | orchestrator | Sunday 06 July 2025 19:54:53 +0000 (0:00:00.511) 0:00:09.083 *********** 2025-07-06 19:54:58.617168 | orchestrator | ok: [testbed-manager] 2025-07-06 19:54:58.617179 | orchestrator | 2025-07-06 19:54:58.617210 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-07-06 19:54:58.617222 | orchestrator | Sunday 06 July 2025 19:54:53 +0000 (0:00:00.399) 0:00:09.483 *********** 2025-07-06 19:54:58.617258 | orchestrator | changed: [testbed-manager] 2025-07-06 19:54:58.617273 | orchestrator | 2025-07-06 19:54:58.617285 | orchestrator | TASK [osism.services.wireguard : 
Copy client configuration files] ************** 2025-07-06 19:54:58.617297 | orchestrator | Sunday 06 July 2025 19:54:54 +0000 (0:00:01.201) 0:00:10.685 *********** 2025-07-06 19:54:58.617310 | orchestrator | changed: [testbed-manager] => (item=None) 2025-07-06 19:54:58.617323 | orchestrator | changed: [testbed-manager] 2025-07-06 19:54:58.617335 | orchestrator | 2025-07-06 19:54:58.617348 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-07-06 19:54:58.617360 | orchestrator | Sunday 06 July 2025 19:54:55 +0000 (0:00:00.898) 0:00:11.584 *********** 2025-07-06 19:54:58.617373 | orchestrator | changed: [testbed-manager] 2025-07-06 19:54:58.617385 | orchestrator | 2025-07-06 19:54:58.617397 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-07-06 19:54:58.617410 | orchestrator | Sunday 06 July 2025 19:54:57 +0000 (0:00:01.694) 0:00:13.279 *********** 2025-07-06 19:54:58.617422 | orchestrator | changed: [testbed-manager] 2025-07-06 19:54:58.617434 | orchestrator | 2025-07-06 19:54:58.617447 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 19:54:58.617459 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 19:54:58.617473 | orchestrator | 2025-07-06 19:54:58.617486 | orchestrator | 2025-07-06 19:54:58.617499 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 19:54:58.617511 | orchestrator | Sunday 06 July 2025 19:54:58 +0000 (0:00:00.922) 0:00:14.201 *********** 2025-07-06 19:54:58.617523 | orchestrator | =============================================================================== 2025-07-06 19:54:58.617536 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 5.78s 2025-07-06 19:54:58.617548 | orchestrator | osism.services.wireguard : Manage 
wg-quick@wg0.service service ---------- 1.69s 2025-07-06 19:54:58.617559 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.20s 2025-07-06 19:54:58.617570 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.16s 2025-07-06 19:54:58.617580 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.92s 2025-07-06 19:54:58.617591 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.90s 2025-07-06 19:54:58.617602 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.55s 2025-07-06 19:54:58.617612 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.51s 2025-07-06 19:54:58.617623 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.51s 2025-07-06 19:54:58.617633 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.40s 2025-07-06 19:54:58.617644 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.40s 2025-07-06 19:54:58.883867 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-07-06 19:54:58.925369 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-07-06 19:54:58.925469 | orchestrator | Dload Upload Total Spent Left Speed 2025-07-06 19:54:59.010214 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 176 0 --:--:-- --:--:-- --:--:-- 178 2025-07-06 19:54:59.025331 | orchestrator | + osism apply --environment custom workarounds 2025-07-06 19:55:00.823474 | orchestrator | 2025-07-06 19:55:00 | INFO  | Trying to run play workarounds in environment custom 2025-07-06 19:55:10.931771 | orchestrator | 2025-07-06 19:55:10 | INFO  | Task f67d7739-0707-4474-9f2f-f980c6da48e8 (workarounds) was prepared for execution. 
2025-07-06 19:55:10.931901 | orchestrator | 2025-07-06 19:55:10 | INFO  | It takes a moment until task f67d7739-0707-4474-9f2f-f980c6da48e8 (workarounds) has been started and output is visible here. 2025-07-06 19:55:34.808733 | orchestrator | 2025-07-06 19:55:34.808855 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-06 19:55:34.808871 | orchestrator | 2025-07-06 19:55:34.808884 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-07-06 19:55:34.808896 | orchestrator | Sunday 06 July 2025 19:55:14 +0000 (0:00:00.141) 0:00:00.141 *********** 2025-07-06 19:55:34.808908 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-07-06 19:55:34.808919 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-07-06 19:55:34.808930 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-07-06 19:55:34.808941 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-07-06 19:55:34.808952 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-07-06 19:55:34.808963 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-07-06 19:55:34.808974 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-07-06 19:55:34.808985 | orchestrator | 2025-07-06 19:55:34.808996 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-07-06 19:55:34.809096 | orchestrator | 2025-07-06 19:55:34.809109 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-07-06 19:55:34.809120 | orchestrator | Sunday 06 July 2025 19:55:15 +0000 (0:00:00.594) 0:00:00.736 *********** 2025-07-06 19:55:34.809148 | orchestrator | ok: [testbed-manager] 2025-07-06 19:55:34.809161 | orchestrator | 2025-07-06 19:55:34.809172 | 
orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-07-06 19:55:34.809183 | orchestrator | 2025-07-06 19:55:34.809194 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-07-06 19:55:34.809205 | orchestrator | Sunday 06 July 2025 19:55:17 +0000 (0:00:01.967) 0:00:02.703 *********** 2025-07-06 19:55:34.809216 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:55:34.809227 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:55:34.809238 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:55:34.809251 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:55:34.809262 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:55:34.809275 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:55:34.809288 | orchestrator | 2025-07-06 19:55:34.809300 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-07-06 19:55:34.809313 | orchestrator | 2025-07-06 19:55:34.809326 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-07-06 19:55:34.809338 | orchestrator | Sunday 06 July 2025 19:55:19 +0000 (0:00:01.825) 0:00:04.529 *********** 2025-07-06 19:55:34.809351 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-07-06 19:55:34.809365 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-07-06 19:55:34.809377 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-07-06 19:55:34.809390 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-07-06 19:55:34.809403 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-07-06 19:55:34.809415 | orchestrator 
| changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-07-06 19:55:34.809427 | orchestrator | 2025-07-06 19:55:34.809440 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2025-07-06 19:55:34.809453 | orchestrator | Sunday 06 July 2025 19:55:20 +0000 (0:00:01.494) 0:00:06.023 *********** 2025-07-06 19:55:34.809466 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:55:34.809478 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:55:34.809490 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:55:34.809503 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:55:34.809539 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:55:34.809552 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:55:34.809565 | orchestrator | 2025-07-06 19:55:34.809578 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-07-06 19:55:34.809590 | orchestrator | Sunday 06 July 2025 19:55:24 +0000 (0:00:03.774) 0:00:09.798 *********** 2025-07-06 19:55:34.809603 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:55:34.809614 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:55:34.809624 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:55:34.809635 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:55:34.809645 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:55:34.809656 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:55:34.809666 | orchestrator | 2025-07-06 19:55:34.809677 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-07-06 19:55:34.809688 | orchestrator | 2025-07-06 19:55:34.809699 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-07-06 19:55:34.809709 | orchestrator | Sunday 06 July 2025 19:55:25 +0000 (0:00:00.710) 0:00:10.508 *********** 2025-07-06 
19:55:34.809720 | orchestrator | changed: [testbed-manager] 2025-07-06 19:55:34.809731 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:55:34.809742 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:55:34.809752 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:55:34.809763 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:55:34.809773 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:55:34.809784 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:55:34.809794 | orchestrator | 2025-07-06 19:55:34.809805 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-07-06 19:55:34.809816 | orchestrator | Sunday 06 July 2025 19:55:26 +0000 (0:00:01.715) 0:00:12.224 *********** 2025-07-06 19:55:34.809826 | orchestrator | changed: [testbed-manager] 2025-07-06 19:55:34.809837 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:55:34.809847 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:55:34.809858 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:55:34.809869 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:55:34.809879 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:55:34.809907 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:55:34.809918 | orchestrator | 2025-07-06 19:55:34.809929 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-07-06 19:55:34.809940 | orchestrator | Sunday 06 July 2025 19:55:28 +0000 (0:00:01.641) 0:00:13.866 *********** 2025-07-06 19:55:34.809951 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:55:34.809962 | orchestrator | ok: [testbed-manager] 2025-07-06 19:55:34.809972 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:55:34.809983 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:55:34.809994 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:55:34.810081 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:55:34.810105 | orchestrator | ok: [testbed-node-5] 
2025-07-06 19:55:34.810124 | orchestrator | 2025-07-06 19:55:34.810141 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-07-06 19:55:34.810160 | orchestrator | Sunday 06 July 2025 19:55:29 +0000 (0:00:01.472) 0:00:15.339 *********** 2025-07-06 19:55:34.810172 | orchestrator | changed: [testbed-manager] 2025-07-06 19:55:34.810183 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:55:34.810194 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:55:34.810205 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:55:34.810216 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:55:34.810226 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:55:34.810237 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:55:34.810247 | orchestrator | 2025-07-06 19:55:34.810258 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-07-06 19:55:34.810269 | orchestrator | Sunday 06 July 2025 19:55:31 +0000 (0:00:01.722) 0:00:17.061 *********** 2025-07-06 19:55:34.810279 | orchestrator | skipping: [testbed-manager] 2025-07-06 19:55:34.810297 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:55:34.810318 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:55:34.810328 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:55:34.810339 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:55:34.810349 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:55:34.810360 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:55:34.810370 | orchestrator | 2025-07-06 19:55:34.810381 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-07-06 19:55:34.810392 | orchestrator | 2025-07-06 19:55:34.810402 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-07-06 19:55:34.810413 | orchestrator | Sunday 06 July 2025 19:55:32 +0000 (0:00:00.533) 
0:00:17.594 *********** 2025-07-06 19:55:34.810424 | orchestrator | ok: [testbed-manager] 2025-07-06 19:55:34.810435 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:55:34.810445 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:55:34.810456 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:55:34.810467 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:55:34.810477 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:55:34.810487 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:55:34.810498 | orchestrator | 2025-07-06 19:55:34.810509 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 19:55:34.810521 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-06 19:55:34.810533 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 19:55:34.810544 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 19:55:34.810555 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 19:55:34.810566 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 19:55:34.810577 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 19:55:34.810588 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 19:55:34.810598 | orchestrator | 2025-07-06 19:55:34.810609 | orchestrator | 2025-07-06 19:55:34.810620 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 19:55:34.810631 | orchestrator | Sunday 06 July 2025 19:55:34 +0000 (0:00:02.551) 0:00:20.146 *********** 2025-07-06 19:55:34.810642 | orchestrator | 
=============================================================================== 2025-07-06 19:55:34.810652 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.77s 2025-07-06 19:55:34.810663 | orchestrator | Install python3-docker -------------------------------------------------- 2.55s 2025-07-06 19:55:34.810674 | orchestrator | Apply netplan configuration --------------------------------------------- 1.97s 2025-07-06 19:55:34.810685 | orchestrator | Apply netplan configuration --------------------------------------------- 1.83s 2025-07-06 19:55:34.810695 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.72s 2025-07-06 19:55:34.810706 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.72s 2025-07-06 19:55:34.810717 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.64s 2025-07-06 19:55:34.810727 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.49s 2025-07-06 19:55:34.810738 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.47s 2025-07-06 19:55:34.810749 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.71s 2025-07-06 19:55:34.810768 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.59s 2025-07-06 19:55:34.810788 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.53s 2025-07-06 19:55:35.196618 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-07-06 19:55:46.864360 | orchestrator | 2025-07-06 19:55:46 | INFO  | Task d72d80a2-9de2-43fb-ba1f-27a4762ede36 (reboot) was prepared for execution. 
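The `osism apply reboot` invocation passes `-e ireallymeanit=yes`, which is why the "Exit playbook, if user did not mean to reboot systems" task is skipped on every node. A shell rendition of that confirmation-guard pattern (the real guard is an Ansible task, so this is a sketch, not the playbook's code):

```shell
# Confirmation guard: refuse to act unless the caller explicitly set
# ireallymeanit=yes (mirrors the playbook's skip/exit behaviour).
confirm_reboot() {
    if [ "${ireallymeanit:-no}" != "yes" ]; then
        echo "refusing to reboot: pass -e ireallymeanit=yes to confirm" >&2
        return 1
    fi
}
```

With the flag set the guard is a no-op and the reboot tasks proceed; without it the playbook bails out before touching any host.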
2025-07-06 19:55:46.864464 | orchestrator | 2025-07-06 19:55:46 | INFO  | It takes a moment until task d72d80a2-9de2-43fb-ba1f-27a4762ede36 (reboot) has been started and output is visible here. 2025-07-06 19:55:56.364791 | orchestrator | 2025-07-06 19:55:56.364929 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-07-06 19:55:56.364948 | orchestrator | 2025-07-06 19:55:56.364961 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-07-06 19:55:56.364972 | orchestrator | Sunday 06 July 2025 19:55:50 +0000 (0:00:00.190) 0:00:00.190 *********** 2025-07-06 19:55:56.364983 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:55:56.364996 | orchestrator | 2025-07-06 19:55:56.365006 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-07-06 19:55:56.365050 | orchestrator | Sunday 06 July 2025 19:55:50 +0000 (0:00:00.106) 0:00:00.297 *********** 2025-07-06 19:55:56.365065 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:55:56.365076 | orchestrator | 2025-07-06 19:55:56.365087 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-07-06 19:55:56.365115 | orchestrator | Sunday 06 July 2025 19:55:51 +0000 (0:00:00.830) 0:00:01.127 *********** 2025-07-06 19:55:56.365127 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:55:56.365137 | orchestrator | 2025-07-06 19:55:56.365148 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-07-06 19:55:56.365158 | orchestrator | 2025-07-06 19:55:56.365169 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-07-06 19:55:56.365180 | orchestrator | Sunday 06 July 2025 19:55:51 +0000 (0:00:00.102) 0:00:01.229 *********** 2025-07-06 19:55:56.365190 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:55:56.365200 | 
orchestrator | 2025-07-06 19:55:56.365211 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-07-06 19:55:56.365222 | orchestrator | Sunday 06 July 2025 19:55:51 +0000 (0:00:00.096) 0:00:01.326 *********** 2025-07-06 19:55:56.365232 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:55:56.365243 | orchestrator | 2025-07-06 19:55:56.365253 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-07-06 19:55:56.365264 | orchestrator | Sunday 06 July 2025 19:55:52 +0000 (0:00:00.600) 0:00:01.926 *********** 2025-07-06 19:55:56.365275 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:55:56.365286 | orchestrator | 2025-07-06 19:55:56.365298 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-07-06 19:55:56.365310 | orchestrator | 2025-07-06 19:55:56.365324 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-07-06 19:55:56.365344 | orchestrator | Sunday 06 July 2025 19:55:52 +0000 (0:00:00.095) 0:00:02.022 *********** 2025-07-06 19:55:56.365362 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:55:56.365380 | orchestrator | 2025-07-06 19:55:56.365399 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-07-06 19:55:56.365418 | orchestrator | Sunday 06 July 2025 19:55:52 +0000 (0:00:00.158) 0:00:02.181 *********** 2025-07-06 19:55:56.365437 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:55:56.365454 | orchestrator | 2025-07-06 19:55:56.365471 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-07-06 19:55:56.365489 | orchestrator | Sunday 06 July 2025 19:55:53 +0000 (0:00:00.617) 0:00:02.798 *********** 2025-07-06 19:55:56.365507 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:55:56.365527 | orchestrator | 2025-07-06 19:55:56.365545 | 
orchestrator | PLAY [Reboot systems] ********************************************************** 2025-07-06 19:55:56.365595 | orchestrator | 2025-07-06 19:55:56.365613 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-07-06 19:55:56.365628 | orchestrator | Sunday 06 July 2025 19:55:53 +0000 (0:00:00.120) 0:00:02.919 *********** 2025-07-06 19:55:56.365642 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:55:56.365659 | orchestrator | 2025-07-06 19:55:56.365677 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-07-06 19:55:56.365694 | orchestrator | Sunday 06 July 2025 19:55:53 +0000 (0:00:00.098) 0:00:03.017 *********** 2025-07-06 19:55:56.365711 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:55:56.365766 | orchestrator | 2025-07-06 19:55:56.365788 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-07-06 19:55:56.365806 | orchestrator | Sunday 06 July 2025 19:55:54 +0000 (0:00:00.635) 0:00:03.653 *********** 2025-07-06 19:55:56.365825 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:55:56.365837 | orchestrator | 2025-07-06 19:55:56.365848 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-07-06 19:55:56.365859 | orchestrator | 2025-07-06 19:55:56.365870 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-07-06 19:55:56.365881 | orchestrator | Sunday 06 July 2025 19:55:54 +0000 (0:00:00.115) 0:00:03.768 *********** 2025-07-06 19:55:56.365891 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:55:56.365902 | orchestrator | 2025-07-06 19:55:56.365913 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-07-06 19:55:56.365923 | orchestrator | Sunday 06 July 2025 19:55:54 +0000 (0:00:00.112) 0:00:03.881 *********** 2025-07-06 
19:55:56.365934 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:55:56.365945 | orchestrator | 2025-07-06 19:55:56.365955 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-07-06 19:55:56.365966 | orchestrator | Sunday 06 July 2025 19:55:55 +0000 (0:00:00.695) 0:00:04.576 *********** 2025-07-06 19:55:56.365976 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:55:56.365987 | orchestrator | 2025-07-06 19:55:56.365998 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-07-06 19:55:56.366008 | orchestrator | 2025-07-06 19:55:56.366123 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-07-06 19:55:56.366135 | orchestrator | Sunday 06 July 2025 19:55:55 +0000 (0:00:00.123) 0:00:04.700 *********** 2025-07-06 19:55:56.366146 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:55:56.366157 | orchestrator | 2025-07-06 19:55:56.366168 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-07-06 19:55:56.366179 | orchestrator | Sunday 06 July 2025 19:55:55 +0000 (0:00:00.102) 0:00:04.802 *********** 2025-07-06 19:55:56.366189 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:55:56.366200 | orchestrator | 2025-07-06 19:55:56.366211 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-07-06 19:55:56.366222 | orchestrator | Sunday 06 July 2025 19:55:55 +0000 (0:00:00.643) 0:00:05.445 *********** 2025-07-06 19:55:56.366256 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:55:56.366267 | orchestrator | 2025-07-06 19:55:56.366278 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 19:55:56.366291 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 19:55:56.366304 | orchestrator | 
testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 19:55:56.366315 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 19:55:56.366326 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 19:55:56.366337 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 19:55:56.366361 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 19:55:56.366372 | orchestrator | 2025-07-06 19:55:56.366383 | orchestrator | 2025-07-06 19:55:56.366394 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 19:55:56.366405 | orchestrator | Sunday 06 July 2025 19:55:56 +0000 (0:00:00.042) 0:00:05.488 *********** 2025-07-06 19:55:56.366415 | orchestrator | =============================================================================== 2025-07-06 19:55:56.366426 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.02s 2025-07-06 19:55:56.366437 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.68s 2025-07-06 19:55:56.366448 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.60s 2025-07-06 19:55:56.634449 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-07-06 19:56:08.655289 | orchestrator | 2025-07-06 19:56:08 | INFO  | Task b60a43db-f7d4-4852-a030-3be745156393 (wait-for-connection) was prepared for execution. 2025-07-06 19:56:08.655405 | orchestrator | 2025-07-06 19:56:08 | INFO  | It takes a moment until task b60a43db-f7d4-4852-a030-3be745156393 (wait-for-connection) has been started and output is visible here. 
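The two orchestrator commands form a reboot-then-reconnect sequence: fire the reboot without waiting ("do not wait for the reboot to complete" is the task that runs), then run a second playbook that blocks until every node answers again. Bundled as a sketch (not the testbed's actual wrapper script):

```shell
# Sketch of the sequence driven above: reboot all hosts in the
# 'testbed-nodes' group, then wait until each one is reachable again.
reboot_testbed_nodes() {
    osism apply reboot -l testbed-nodes -e ireallymeanit=yes
    osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
}
```

Splitting reboot and wait into two plays keeps the first one fast (it never holds an SSH session open across the reboot) and lets the second poll all nodes in parallel, which is why the wait task takes ~11.5 s for six nodes.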
2025-07-06 19:56:24.442677 | orchestrator | 2025-07-06 19:56:24.442799 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-07-06 19:56:24.442815 | orchestrator | 2025-07-06 19:56:24.442828 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-07-06 19:56:24.442839 | orchestrator | Sunday 06 July 2025 19:56:12 +0000 (0:00:00.236) 0:00:00.237 *********** 2025-07-06 19:56:24.442850 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:56:24.442867 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:56:24.442879 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:56:24.442890 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:56:24.442901 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:56:24.442911 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:56:24.442922 | orchestrator | 2025-07-06 19:56:24.442933 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 19:56:24.442944 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 19:56:24.442957 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 19:56:24.442968 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 19:56:24.443000 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 19:56:24.443012 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 19:56:24.443024 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 19:56:24.443125 | orchestrator | 2025-07-06 19:56:24.443146 | orchestrator | 2025-07-06 19:56:24.443158 | orchestrator | TASKS RECAP 
******************************************************************** 2025-07-06 19:56:24.443169 | orchestrator | Sunday 06 July 2025 19:56:24 +0000 (0:00:11.504) 0:00:11.741 *********** 2025-07-06 19:56:24.443180 | orchestrator | =============================================================================== 2025-07-06 19:56:24.443191 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.51s 2025-07-06 19:56:24.705879 | orchestrator | + osism apply hddtemp 2025-07-06 19:56:36.661845 | orchestrator | 2025-07-06 19:56:36 | INFO  | Task 2eb0eafb-2df9-4383-8d9e-48fcbc302011 (hddtemp) was prepared for execution. 2025-07-06 19:56:36.661988 | orchestrator | 2025-07-06 19:56:36 | INFO  | It takes a moment until task 2eb0eafb-2df9-4383-8d9e-48fcbc302011 (hddtemp) has been started and output is visible here. 2025-07-06 19:57:03.572834 | orchestrator | 2025-07-06 19:57:03.572951 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-07-06 19:57:03.572969 | orchestrator | 2025-07-06 19:57:03.572981 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-07-06 19:57:03.572993 | orchestrator | Sunday 06 July 2025 19:56:40 +0000 (0:00:00.258) 0:00:00.258 *********** 2025-07-06 19:57:03.573004 | orchestrator | ok: [testbed-manager] 2025-07-06 19:57:03.573016 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:57:03.573028 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:57:03.573039 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:57:03.573050 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:57:03.573102 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:57:03.573115 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:57:03.573127 | orchestrator | 2025-07-06 19:57:03.573138 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-07-06 19:57:03.573149 | orchestrator | Sunday 06 July 2025 
19:56:41 +0000 (0:00:00.686) 0:00:00.945 *********** 2025-07-06 19:57:03.573179 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 19:57:03.573194 | orchestrator | 2025-07-06 19:57:03.573206 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-07-06 19:57:03.573217 | orchestrator | Sunday 06 July 2025 19:56:42 +0000 (0:00:01.208) 0:00:02.153 *********** 2025-07-06 19:57:03.573228 | orchestrator | ok: [testbed-manager] 2025-07-06 19:57:03.573239 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:57:03.573250 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:57:03.573261 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:57:03.573272 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:57:03.573282 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:57:03.573293 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:57:03.573304 | orchestrator | 2025-07-06 19:57:03.573315 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-07-06 19:57:03.573326 | orchestrator | Sunday 06 July 2025 19:56:44 +0000 (0:00:01.942) 0:00:04.096 *********** 2025-07-06 19:57:03.573337 | orchestrator | changed: [testbed-manager] 2025-07-06 19:57:03.573349 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:57:03.573361 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:57:03.573372 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:57:03.573383 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:57:03.573393 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:57:03.573404 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:57:03.573415 | orchestrator | 2025-07-06 19:57:03.573427 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module 
is available] ********* 2025-07-06 19:57:03.573438 | orchestrator | Sunday 06 July 2025 19:56:45 +0000 (0:00:01.130) 0:00:05.226 *********** 2025-07-06 19:57:03.573449 | orchestrator | ok: [testbed-node-0] 2025-07-06 19:57:03.573460 | orchestrator | ok: [testbed-node-1] 2025-07-06 19:57:03.573471 | orchestrator | ok: [testbed-node-2] 2025-07-06 19:57:03.573482 | orchestrator | ok: [testbed-node-3] 2025-07-06 19:57:03.573492 | orchestrator | ok: [testbed-node-4] 2025-07-06 19:57:03.573503 | orchestrator | ok: [testbed-manager] 2025-07-06 19:57:03.573514 | orchestrator | ok: [testbed-node-5] 2025-07-06 19:57:03.573525 | orchestrator | 2025-07-06 19:57:03.573536 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-07-06 19:57:03.573547 | orchestrator | Sunday 06 July 2025 19:56:46 +0000 (0:00:01.110) 0:00:06.336 *********** 2025-07-06 19:57:03.573558 | orchestrator | skipping: [testbed-node-0] 2025-07-06 19:57:03.573569 | orchestrator | skipping: [testbed-node-1] 2025-07-06 19:57:03.573602 | orchestrator | changed: [testbed-manager] 2025-07-06 19:57:03.573613 | orchestrator | skipping: [testbed-node-2] 2025-07-06 19:57:03.573624 | orchestrator | skipping: [testbed-node-3] 2025-07-06 19:57:03.573635 | orchestrator | skipping: [testbed-node-4] 2025-07-06 19:57:03.573646 | orchestrator | skipping: [testbed-node-5] 2025-07-06 19:57:03.573656 | orchestrator | 2025-07-06 19:57:03.573667 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-07-06 19:57:03.573678 | orchestrator | Sunday 06 July 2025 19:56:47 +0000 (0:00:00.800) 0:00:07.137 *********** 2025-07-06 19:57:03.573689 | orchestrator | changed: [testbed-manager] 2025-07-06 19:57:03.573699 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:57:03.573710 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:57:03.573721 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:57:03.573732 | orchestrator | changed: 
[testbed-node-4] 2025-07-06 19:57:03.573742 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:57:03.573753 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:57:03.573763 | orchestrator | 2025-07-06 19:57:03.573775 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-07-06 19:57:03.573785 | orchestrator | Sunday 06 July 2025 19:56:59 +0000 (0:00:12.449) 0:00:19.586 *********** 2025-07-06 19:57:03.573797 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 19:57:03.573808 | orchestrator | 2025-07-06 19:57:03.573819 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-07-06 19:57:03.573829 | orchestrator | Sunday 06 July 2025 19:57:01 +0000 (0:00:01.393) 0:00:20.980 *********** 2025-07-06 19:57:03.573840 | orchestrator | changed: [testbed-manager] 2025-07-06 19:57:03.573851 | orchestrator | changed: [testbed-node-1] 2025-07-06 19:57:03.573862 | orchestrator | changed: [testbed-node-0] 2025-07-06 19:57:03.573872 | orchestrator | changed: [testbed-node-3] 2025-07-06 19:57:03.573883 | orchestrator | changed: [testbed-node-2] 2025-07-06 19:57:03.573893 | orchestrator | changed: [testbed-node-4] 2025-07-06 19:57:03.573904 | orchestrator | changed: [testbed-node-5] 2025-07-06 19:57:03.573915 | orchestrator | 2025-07-06 19:57:03.573926 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 19:57:03.573937 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 19:57:03.573967 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-06 19:57:03.573979 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-06 19:57:03.573991 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-06 19:57:03.574002 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-06 19:57:03.574013 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-06 19:57:03.574133 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-06 19:57:03.574187 | orchestrator | 2025-07-06 19:57:03.574202 | orchestrator | 2025-07-06 19:57:03.574213 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 19:57:03.574225 | orchestrator | Sunday 06 July 2025 19:57:03 +0000 (0:00:01.836) 0:00:22.816 *********** 2025-07-06 19:57:03.574236 | orchestrator | =============================================================================== 2025-07-06 19:57:03.574258 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.45s 2025-07-06 19:57:03.574269 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.94s 2025-07-06 19:57:03.574280 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.84s 2025-07-06 19:57:03.574291 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.39s 2025-07-06 19:57:03.574302 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.21s 2025-07-06 19:57:03.574313 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.13s 2025-07-06 19:57:03.574324 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.11s 2025-07-06 19:57:03.574335 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.80s 2025-07-06 19:57:03.574346 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.69s 2025-07-06 19:57:03.846666 | orchestrator | ++ semver latest 7.1.1 2025-07-06 19:57:03.901572 | orchestrator | + [[ -1 -ge 0 ]] 2025-07-06 19:57:03.901664 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-07-06 19:57:03.901679 | orchestrator | + sudo systemctl restart manager.service 2025-07-06 19:58:01.225985 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-07-06 19:58:01.226212 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-07-06 19:58:01.226231 | orchestrator | + local max_attempts=60 2025-07-06 19:58:01.226249 | orchestrator | + local name=ceph-ansible 2025-07-06 19:58:01.226266 | orchestrator | + local attempt_num=1 2025-07-06 19:58:01.226294 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-06 19:58:01.259271 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-07-06 19:58:01.259358 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-06 19:58:01.259370 | orchestrator | + sleep 5 2025-07-06 19:58:06.267410 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-06 19:58:06.289416 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-07-06 19:58:06.289526 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-06 19:58:06.289550 | orchestrator | + sleep 5 2025-07-06 19:58:11.292912 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-06 19:58:11.323811 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-07-06 19:58:11.323902 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-06 19:58:11.323911 | orchestrator | + sleep 5 2025-07-06 19:58:16.329376 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-06 19:58:16.367844 | orchestrator | + 
[[ unhealthy == \h\e\a\l\t\h\y ]]
2025-07-06 19:58:16.367950 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-06 19:58:16.367966 | orchestrator | + sleep 5
2025-07-06 19:58:21.372902 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-06 19:58:21.412565 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-07-06 19:58:21.412668 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-06 19:58:21.412683 | orchestrator | + sleep 5
2025-07-06 19:58:26.418773 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-06 19:58:26.459426 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-07-06 19:58:26.459519 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-06 19:58:26.459532 | orchestrator | + sleep 5
2025-07-06 19:58:31.465414 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-06 19:58:31.505719 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-07-06 19:58:31.505803 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-06 19:58:31.505818 | orchestrator | + sleep 5
2025-07-06 19:58:36.510004 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-06 19:58:36.541451 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-07-06 19:58:36.541569 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-06 19:58:36.541589 | orchestrator | + sleep 5
2025-07-06 19:58:41.546829 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-06 19:58:41.585307 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-07-06 19:58:41.585404 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-06 19:58:41.585422 | orchestrator | + sleep 5
2025-07-06 19:58:46.588396 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-06 19:58:46.623530 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-07-06 19:58:46.623625 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-06 19:58:46.623640 | orchestrator | + sleep 5
2025-07-06 19:58:51.627331 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-06 19:58:51.664233 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-07-06 19:58:51.664329 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-06 19:58:51.664344 | orchestrator | + sleep 5
2025-07-06 19:58:56.668929 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-06 19:58:56.711059 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-07-06 19:58:56.711155 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-06 19:58:56.711170 | orchestrator | + sleep 5
2025-07-06 19:59:01.716788 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-06 19:59:01.753019 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-07-06 19:59:01.753128 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-07-06 19:59:01.753145 | orchestrator | + sleep 5
2025-07-06 19:59:06.758311 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-07-06 19:59:06.803316 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-07-06 19:59:06.803408 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-07-06 19:59:06.803422 | orchestrator | + local max_attempts=60
2025-07-06 19:59:06.803434 | orchestrator | + local name=kolla-ansible
2025-07-06 19:59:06.803446 | orchestrator | + local attempt_num=1
2025-07-06 19:59:06.803457 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-07-06 19:59:06.836046 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-07-06 19:59:06.836128 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-07-06 19:59:06.836142 | orchestrator | + local max_attempts=60
2025-07-06 19:59:06.836154 | orchestrator | + local name=osism-ansible
2025-07-06 19:59:06.836165 | orchestrator | + local attempt_num=1
2025-07-06 19:59:06.836176 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-07-06 19:59:06.866906 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-07-06 19:59:06.866984 | orchestrator | + [[ true == \t\r\u\e ]]
2025-07-06 19:59:06.867000 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-07-06 19:59:07.018357 | orchestrator | ARA in ceph-ansible already disabled.
2025-07-06 19:59:07.151782 | orchestrator | ARA in kolla-ansible already disabled.
2025-07-06 19:59:07.325309 | orchestrator | ARA in osism-ansible already disabled.
2025-07-06 19:59:07.467267 | orchestrator | + osism apply gather-facts
2025-07-06 19:59:19.411903 | orchestrator | 2025-07-06 19:59:19 | INFO  | Task da9368aa-c251-45d1-9481-91319e246cb8 (gather-facts) was prepared for execution.
2025-07-06 19:59:19.412019 | orchestrator | 2025-07-06 19:59:19 | INFO  | It takes a moment until task da9368aa-c251-45d1-9481-91319e246cb8 (gather-facts) has been started and output is visible here.
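The xtrace above repeats the same health probe every five seconds until each manager container (ceph-ansible, kolla-ansible, osism-ansible) reports healthy. Reconstructed as a standalone helper it looks roughly like this; note that the `probe_health` wrapper is an addition of this sketch (the trace shows `docker inspect` called inline), introduced so the loop can be exercised without a Docker daemon:

```shell
#!/usr/bin/env bash

# Hypothetical wrapper around the probe seen in the trace; the real loop
# calls docker inspect directly.
probe_health() {
    /usr/bin/docker inspect -f '{{.State.Health.Status}}' "$1"
}

# Poll a container every 5 seconds until it reports "healthy",
# giving up after max_attempts probes (reconstructed from the xtrace;
# the actual script lives in the testbed configuration repository).
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [[ "$(probe_health "$name")" == healthy ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "ERROR: $name not healthy after $max_attempts attempts" >&2
            return 1
        fi
        sleep 5
    done
}
```

In the trace this pattern runs three times in sequence, once per manager container, with a budget of 60 probes each.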
2025-07-06 19:59:32.824782 | orchestrator |
2025-07-06 19:59:32.824914 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-07-06 19:59:32.824939 | orchestrator |
2025-07-06 19:59:32.824959 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-07-06 19:59:32.824979 | orchestrator | Sunday 06 July 2025 19:59:23 +0000 (0:00:00.251) 0:00:00.251 ***********
2025-07-06 19:59:32.824998 | orchestrator | ok: [testbed-node-0]
2025-07-06 19:59:32.825021 | orchestrator | ok: [testbed-node-1]
2025-07-06 19:59:32.825032 | orchestrator | ok: [testbed-node-2]
2025-07-06 19:59:32.825044 | orchestrator | ok: [testbed-manager]
2025-07-06 19:59:32.825054 | orchestrator | ok: [testbed-node-3]
2025-07-06 19:59:32.825065 | orchestrator | ok: [testbed-node-5]
2025-07-06 19:59:32.825075 | orchestrator | ok: [testbed-node-4]
2025-07-06 19:59:32.825086 | orchestrator |
2025-07-06 19:59:32.825097 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-07-06 19:59:32.825108 | orchestrator |
2025-07-06 19:59:32.825119 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-07-06 19:59:32.825130 | orchestrator | Sunday 06 July 2025 19:59:32 +0000 (0:00:08.605) 0:00:08.857 ***********
2025-07-06 19:59:32.825141 | orchestrator | skipping: [testbed-manager]
2025-07-06 19:59:32.825152 | orchestrator | skipping: [testbed-node-0]
2025-07-06 19:59:32.825190 | orchestrator | skipping: [testbed-node-1]
2025-07-06 19:59:32.825202 | orchestrator | skipping: [testbed-node-2]
2025-07-06 19:59:32.825213 | orchestrator | skipping: [testbed-node-3]
2025-07-06 19:59:32.825224 | orchestrator | skipping: [testbed-node-4]
2025-07-06 19:59:32.825234 | orchestrator | skipping: [testbed-node-5]
2025-07-06 19:59:32.825245 | orchestrator |
2025-07-06 19:59:32.825255 | orchestrator | PLAY RECAP *********************************************************************
2025-07-06 19:59:32.825266 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-06 19:59:32.825279 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-06 19:59:32.825290 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-06 19:59:32.825301 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-06 19:59:32.825311 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-06 19:59:32.825322 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-06 19:59:32.825333 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-06 19:59:32.825343 | orchestrator |
2025-07-06 19:59:32.825354 | orchestrator |
2025-07-06 19:59:32.825365 | orchestrator | TASKS RECAP ********************************************************************
2025-07-06 19:59:32.825376 | orchestrator | Sunday 06 July 2025 19:59:32 +0000 (0:00:00.474) 0:00:09.331 ***********
2025-07-06 19:59:32.825386 | orchestrator | ===============================================================================
2025-07-06 19:59:32.825397 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.61s
2025-07-06 19:59:32.825408 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.47s
2025-07-06 19:59:33.096123 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2025-07-06 19:59:33.106165 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2025-07-06 19:59:33.116462 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2025-07-06 19:59:33.125971 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2025-07-06 19:59:33.135824 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2025-07-06 19:59:33.145791 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2025-07-06 19:59:33.155801 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2025-07-06 19:59:33.166208 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2025-07-06 19:59:33.180703 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2025-07-06 19:59:33.191036 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2025-07-06 19:59:33.204688 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2025-07-06 19:59:33.214250 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2025-07-06 19:59:33.225569 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2025-07-06 19:59:33.234907 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2025-07-06 19:59:33.244493 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2025-07-06 19:59:33.255088 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2025-07-06 19:59:33.271070 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2025-07-06 19:59:33.286595 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2025-07-06 19:59:33.297480 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2025-07-06 19:59:33.310382 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2025-07-06 19:59:33.325492 | orchestrator | + [[ false == \t\r\u\e ]]
2025-07-06 19:59:33.624859 | orchestrator | ok: Runtime: 0:23:17.585423
2025-07-06 19:59:33.724956 |
2025-07-06 19:59:33.725088 | TASK [Deploy services]
2025-07-06 19:59:34.256701 | orchestrator | skipping: Conditional result was False
2025-07-06 19:59:34.266738 |
2025-07-06 19:59:34.267052 | TASK [Deploy in a nutshell]
2025-07-06 19:59:35.020051 | orchestrator |
2025-07-06 19:59:35.020228 | orchestrator | # PULL IMAGES
2025-07-06 19:59:35.020249 | orchestrator |
2025-07-06 19:59:35.020260 | orchestrator | + set -e
2025-07-06 19:59:35.020274 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-07-06 19:59:35.020291 | orchestrator | ++ export INTERACTIVE=false
2025-07-06 19:59:35.020303 | orchestrator | ++ INTERACTIVE=false
2025-07-06 19:59:35.020341 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-07-06 19:59:35.020360 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-07-06 19:59:35.020371 | orchestrator | + source /opt/manager-vars.sh
2025-07-06 19:59:35.020380 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-07-06 19:59:35.020395 | orchestrator | ++ NUMBER_OF_NODES=6
2025-07-06 19:59:35.020404 | orchestrator | ++ export CEPH_VERSION=reef
2025-07-06 19:59:35.020418 | orchestrator | ++ CEPH_VERSION=reef
2025-07-06 19:59:35.020427 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-07-06 19:59:35.020442 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-07-06 19:59:35.020450 | orchestrator | ++ export MANAGER_VERSION=latest
2025-07-06 19:59:35.020462 | orchestrator | ++ MANAGER_VERSION=latest
2025-07-06 19:59:35.020471 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-07-06 19:59:35.020481 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-07-06 19:59:35.020489 | orchestrator | ++ export ARA=false
2025-07-06 19:59:35.020498 | orchestrator | ++ ARA=false
2025-07-06 19:59:35.020507 | orchestrator | ++ export DEPLOY_MODE=manager
2025-07-06 19:59:35.020556 | orchestrator | ++ DEPLOY_MODE=manager
2025-07-06 19:59:35.020565 | orchestrator | ++ export TEMPEST=false
2025-07-06 19:59:35.020574 | orchestrator | ++ TEMPEST=false
2025-07-06 19:59:35.020582 | orchestrator | ++ export IS_ZUUL=true
2025-07-06 19:59:35.020591 | orchestrator | ++ IS_ZUUL=true
2025-07-06 19:59:35.020600 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.163
2025-07-06 19:59:35.020609 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.163
2025-07-06 19:59:35.020617 | orchestrator | ++ export EXTERNAL_API=false
2025-07-06 19:59:35.020626 | orchestrator | ++ EXTERNAL_API=false
2025-07-06 19:59:35.020634 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-07-06 19:59:35.020643 | orchestrator | ++ IMAGE_USER=ubuntu
2025-07-06 19:59:35.020652 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-07-06 19:59:35.020660 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-07-06 19:59:35.020669 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-07-06 19:59:35.020678 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-07-06 19:59:35.020686 | orchestrator | + echo
2025-07-06 19:59:35.020701 | orchestrator | + echo '# PULL IMAGES'
2025-07-06 19:59:35.020710 | orchestrator | + echo
2025-07-06 19:59:35.020731 | orchestrator | ++ semver latest 7.0.0
2025-07-06 19:59:35.083039 | orchestrator | + [[ -1 -ge 0 ]]
2025-07-06 19:59:35.083141 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-07-06 19:59:35.083156 | orchestrator | + osism apply -r 2 -e custom pull-images
2025-07-06 19:59:36.855884 | orchestrator | 2025-07-06 19:59:36 | INFO  | Trying to run play pull-images in environment custom
2025-07-06 19:59:47.077146 | orchestrator | 2025-07-06 19:59:47 | INFO  | Task d8495603-7626-4337-8bad-1f55b0105b7a (pull-images) was prepared for execution.
2025-07-06 19:59:47.077278 | orchestrator | 2025-07-06 19:59:47 | INFO  | It takes a moment until task d8495603-7626-4337-8bad-1f55b0105b7a (pull-images) has been started and output is visible here.
2025-07-06 20:01:49.702899 | orchestrator |
2025-07-06 20:01:49.703044 | orchestrator | PLAY [Pull images] *************************************************************
2025-07-06 20:01:49.703071 | orchestrator |
2025-07-06 20:01:49.703092 | orchestrator | TASK [Pull keystone image] *****************************************************
2025-07-06 20:01:49.703127 | orchestrator | Sunday 06 July 2025 19:59:50 +0000 (0:00:00.162) 0:00:00.162 ***********
2025-07-06 20:01:49.703148 | orchestrator | changed: [testbed-manager]
2025-07-06 20:01:49.703168 | orchestrator |
2025-07-06 20:01:49.703186 | orchestrator | TASK [Pull other images] *******************************************************
2025-07-06 20:01:49.703204 | orchestrator | Sunday 06 July 2025 20:00:56 +0000 (0:01:05.497) 0:01:05.659 ***********
2025-07-06 20:01:49.703224 | orchestrator | changed: [testbed-manager] => (item=aodh)
2025-07-06 20:01:49.703248 | orchestrator | changed: [testbed-manager] => (item=barbican)
2025-07-06 20:01:49.703268 | orchestrator | changed: [testbed-manager] => (item=ceilometer)
2025-07-06 20:01:49.703286 | orchestrator | changed: [testbed-manager] => (item=cinder)
2025-07-06 20:01:49.703351 | orchestrator | changed: [testbed-manager] => (item=common)
2025-07-06 20:01:49.703375 | orchestrator | changed: [testbed-manager] => (item=designate)
2025-07-06 20:01:49.703450 | orchestrator | changed: [testbed-manager] => (item=glance)
2025-07-06 20:01:49.703476 | orchestrator | changed: [testbed-manager] => (item=grafana)
2025-07-06 20:01:49.703495 | orchestrator | changed: [testbed-manager] => (item=horizon)
2025-07-06 20:01:49.703514 | orchestrator | changed: [testbed-manager] => (item=ironic)
2025-07-06 20:01:49.703526 | orchestrator | changed: [testbed-manager] => (item=loadbalancer)
2025-07-06 20:01:49.703537 | orchestrator | changed: [testbed-manager] => (item=magnum)
2025-07-06 20:01:49.703547 | orchestrator | changed: [testbed-manager] => (item=mariadb)
2025-07-06 20:01:49.703558 | orchestrator | changed: [testbed-manager] => (item=memcached)
2025-07-06 20:01:49.703569 | orchestrator | changed: [testbed-manager] => (item=neutron)
2025-07-06 20:01:49.703580 | orchestrator | changed: [testbed-manager] => (item=nova)
2025-07-06 20:01:49.703590 | orchestrator | changed: [testbed-manager] => (item=octavia)
2025-07-06 20:01:49.703601 | orchestrator | changed: [testbed-manager] => (item=opensearch)
2025-07-06 20:01:49.703612 | orchestrator | changed: [testbed-manager] => (item=openvswitch)
2025-07-06 20:01:49.703622 | orchestrator | changed: [testbed-manager] => (item=ovn)
2025-07-06 20:01:49.703633 | orchestrator | changed: [testbed-manager] => (item=placement)
2025-07-06 20:01:49.703643 | orchestrator | changed: [testbed-manager] => (item=rabbitmq)
2025-07-06 20:01:49.703654 | orchestrator | changed: [testbed-manager] => (item=redis)
2025-07-06 20:01:49.703664 | orchestrator | changed: [testbed-manager] => (item=skyline)
2025-07-06 20:01:49.703675 | orchestrator |
2025-07-06 20:01:49.703686 | orchestrator | PLAY RECAP *********************************************************************
2025-07-06 20:01:49.703697 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 20:01:49.703710 | orchestrator |
2025-07-06 20:01:49.703721 | orchestrator |
2025-07-06 20:01:49.703732 | orchestrator | TASKS RECAP ********************************************************************
2025-07-06 20:01:49.703742 | orchestrator | Sunday 06 July 2025 20:01:49 +0000 (0:00:52.932) 0:01:58.592 ***********
2025-07-06 20:01:49.703753 | orchestrator | ===============================================================================
2025-07-06 20:01:49.703764 | orchestrator | Pull keystone image ---------------------------------------------------- 65.50s
2025-07-06 20:01:49.703775 | orchestrator | Pull other images ------------------------------------------------------ 52.93s
2025-07-06 20:01:51.594452 | orchestrator | 2025-07-06 20:01:51 | INFO  | Trying to run play wipe-partitions in environment custom
2025-07-06 20:02:01.710229 | orchestrator | 2025-07-06 20:02:01 | INFO  | Task 0e2629ab-ef81-4ec6-8b0e-8faf5f4e5805 (wipe-partitions) was prepared for execution.
2025-07-06 20:02:01.710405 | orchestrator | 2025-07-06 20:02:01 | INFO  | It takes a moment until task 0e2629ab-ef81-4ec6-8b0e-8faf5f4e5805 (wipe-partitions) has been started and output is visible here.
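The wipe-partitions play announced above reduces, per data disk, to two destructive operations followed by a udev refresh: drop all filesystem/partition signatures, then zero the first 32 MiB so stale LVM/Ceph metadata cannot be rediscovered. A sketch of the equivalent shell; the helper names and device paths are examples here, not the play's actual implementation, and this must never be run against disks in use:

```shell
#!/usr/bin/env bash

# Zero the first 32 MiB of a device (mirrors the play's
# "Overwrite first 32M with zeros" task). Destructive.
zero_head() {
    dd if=/dev/zero of="$1" bs=1M count=32 conv=fsync status=none
}

# Full per-device wipe as suggested by the task names in the play output.
wipe_device() {
    local dev=$1
    wipefs -a "$dev"    # "Wipe partitions with wipefs"
    zero_head "$dev"    # "Overwrite first 32M with zeros"
}

# Afterwards the play refreshes udev so the kernel view matches, roughly:
#   udevadm control --reload-rules   # "Reload udev rules"
#   udevadm trigger                  # "Request device events from the kernel"
```

In this run the play applies this to /dev/sdb, /dev/sdc, and /dev/sdd on testbed-node-3 through testbed-node-5.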
2025-07-06 20:02:13.348357 | orchestrator |
2025-07-06 20:02:13.348452 | orchestrator | PLAY [Wipe partitions] *********************************************************
2025-07-06 20:02:13.348465 | orchestrator |
2025-07-06 20:02:13.348474 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2025-07-06 20:02:13.348490 | orchestrator | Sunday 06 July 2025 20:02:05 +0000 (0:00:00.131) 0:00:00.131 ***********
2025-07-06 20:02:13.348498 | orchestrator | changed: [testbed-node-3]
2025-07-06 20:02:13.348506 | orchestrator | changed: [testbed-node-4]
2025-07-06 20:02:13.348513 | orchestrator | changed: [testbed-node-5]
2025-07-06 20:02:13.348521 | orchestrator |
2025-07-06 20:02:13.348528 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2025-07-06 20:02:13.348536 | orchestrator | Sunday 06 July 2025 20:02:05 +0000 (0:00:00.537) 0:00:00.668 ***********
2025-07-06 20:02:13.348543 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:02:13.348550 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:02:13.348558 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:02:13.348585 | orchestrator |
2025-07-06 20:02:13.348593 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2025-07-06 20:02:13.348600 | orchestrator | Sunday 06 July 2025 20:02:06 +0000 (0:00:00.238) 0:00:00.906 ***********
2025-07-06 20:02:13.348608 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:02:13.348617 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:02:13.348624 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:02:13.348631 | orchestrator |
2025-07-06 20:02:13.348638 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2025-07-06 20:02:13.348645 | orchestrator | Sunday 06 July 2025 20:02:06 +0000 (0:00:00.671) 0:00:01.578 ***********
2025-07-06 20:02:13.348652 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:02:13.348660 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:02:13.348667 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:02:13.348674 | orchestrator |
2025-07-06 20:02:13.348681 | orchestrator | TASK [Check device availability] ***********************************************
2025-07-06 20:02:13.348688 | orchestrator | Sunday 06 July 2025 20:02:06 +0000 (0:00:00.233) 0:00:01.811 ***********
2025-07-06 20:02:13.348695 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-07-06 20:02:13.348703 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-07-06 20:02:13.348710 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-07-06 20:02:13.348718 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-07-06 20:02:13.348728 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-07-06 20:02:13.348736 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-07-06 20:02:13.348743 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-07-06 20:02:13.348750 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-07-06 20:02:13.348757 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-07-06 20:02:13.348764 | orchestrator |
2025-07-06 20:02:13.348771 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2025-07-06 20:02:13.348779 | orchestrator | Sunday 06 July 2025 20:02:08 +0000 (0:00:01.156) 0:00:02.968 ***********
2025-07-06 20:02:13.348786 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2025-07-06 20:02:13.348793 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2025-07-06 20:02:13.348800 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2025-07-06 20:02:13.348808 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2025-07-06 20:02:13.348815 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2025-07-06 20:02:13.348834 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2025-07-06 20:02:13.348841 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2025-07-06 20:02:13.348857 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2025-07-06 20:02:13.348866 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2025-07-06 20:02:13.348875 | orchestrator |
2025-07-06 20:02:13.348884 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2025-07-06 20:02:13.348892 | orchestrator | Sunday 06 July 2025 20:02:09 +0000 (0:00:01.360) 0:00:04.328 ***********
2025-07-06 20:02:13.348901 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-07-06 20:02:13.348909 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-07-06 20:02:13.348918 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-07-06 20:02:13.348926 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-07-06 20:02:13.348942 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-07-06 20:02:13.348951 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-07-06 20:02:13.348960 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-07-06 20:02:13.348968 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-07-06 20:02:13.348977 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-07-06 20:02:13.348985 | orchestrator |
2025-07-06 20:02:13.348994 | orchestrator | TASK [Reload udev rules] *******************************************************
2025-07-06 20:02:13.349003 | orchestrator | Sunday 06 July 2025 20:02:11 +0000 (0:00:02.219) 0:00:06.548 ***********
2025-07-06 20:02:13.349017 | orchestrator | changed: [testbed-node-4]
2025-07-06 20:02:13.349025 | orchestrator | changed: [testbed-node-3]
2025-07-06 20:02:13.349034 | orchestrator | changed: [testbed-node-5]
2025-07-06 20:02:13.349042 | orchestrator |
2025-07-06 20:02:13.349051 | orchestrator | TASK [Request device events from the kernel] ***********************************
2025-07-06 20:02:13.349059 | orchestrator | Sunday 06 July 2025 20:02:12 +0000 (0:00:00.627) 0:00:07.175 ***********
2025-07-06 20:02:13.349066 | orchestrator | changed: [testbed-node-3]
2025-07-06 20:02:13.349074 | orchestrator | changed: [testbed-node-4]
2025-07-06 20:02:13.349081 | orchestrator | changed: [testbed-node-5]
2025-07-06 20:02:13.349088 | orchestrator |
2025-07-06 20:02:13.349095 | orchestrator | PLAY RECAP *********************************************************************
2025-07-06 20:02:13.349103 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-06 20:02:13.349111 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-06 20:02:13.349132 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-06 20:02:13.349140 | orchestrator |
2025-07-06 20:02:13.349147 | orchestrator |
2025-07-06 20:02:13.349159 | orchestrator | TASKS RECAP ********************************************************************
2025-07-06 20:02:13.349166 | orchestrator | Sunday 06 July 2025 20:02:12 +0000 (0:00:00.630) 0:00:07.805 ***********
2025-07-06 20:02:13.349173 | orchestrator | ===============================================================================
2025-07-06 20:02:13.349181 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.22s
2025-07-06 20:02:13.349188 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.36s
2025-07-06 20:02:13.349195 | orchestrator | Check device availability ----------------------------------------------- 1.16s
2025-07-06 20:02:13.349202 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.67s
2025-07-06 20:02:13.349209 | orchestrator | Request device events from the kernel ----------------------------------- 0.63s
2025-07-06 20:02:13.349216 | orchestrator | Reload udev rules ------------------------------------------------------- 0.63s
2025-07-06 20:02:13.349223 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.54s
2025-07-06 20:02:13.349230 | orchestrator | Remove all rook related logical devices --------------------------------- 0.24s
2025-07-06 20:02:13.349237 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.23s
2025-07-06 20:02:26.189324 | orchestrator | 2025-07-06 20:02:26 | INFO  | Task 8f0776e7-1d1b-4b02-9817-1d3290b61af4 (facts) was prepared for execution.
2025-07-06 20:02:26.189428 | orchestrator | 2025-07-06 20:02:26 | INFO  | It takes a moment until task 8f0776e7-1d1b-4b02-9817-1d3290b61af4 (facts) has been started and output is visible here.
2025-07-06 20:02:39.060887 | orchestrator |
2025-07-06 20:02:39.061002 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-07-06 20:02:39.061019 | orchestrator |
2025-07-06 20:02:39.061031 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-07-06 20:02:39.061043 | orchestrator | Sunday 06 July 2025 20:02:31 +0000 (0:00:00.267) 0:00:00.267 ***********
2025-07-06 20:02:39.061055 | orchestrator | ok: [testbed-manager]
2025-07-06 20:02:39.061067 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:02:39.061078 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:02:39.061089 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:02:39.061099 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:02:39.061110 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:02:39.061121 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:02:39.061132 | orchestrator |
2025-07-06 20:02:39.061143 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-07-06 20:02:39.061182 | orchestrator | Sunday 06 July 2025 20:02:32 +0000 (0:00:01.006) 0:00:01.274 ***********
2025-07-06 20:02:39.061246 | orchestrator | skipping: [testbed-manager]
2025-07-06 20:02:39.061261 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:02:39.061271 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:02:39.061282 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:02:39.061293 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:02:39.061304 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:02:39.061314 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:02:39.061325 | orchestrator |
2025-07-06 20:02:39.061336 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-07-06 20:02:39.061347 | orchestrator |
2025-07-06 20:02:39.061358 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-07-06 20:02:39.061369 | orchestrator | Sunday 06 July 2025 20:02:33 +0000 (0:00:01.104) 0:00:02.378 ***********
2025-07-06 20:02:39.061380 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:02:39.061391 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:02:39.061402 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:02:39.061415 | orchestrator | ok: [testbed-manager]
2025-07-06 20:02:39.061426 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:02:39.061437 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:02:39.061448 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:02:39.061458 | orchestrator |
2025-07-06 20:02:39.061469 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-07-06 20:02:39.061480 | orchestrator |
2025-07-06 20:02:39.061491 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-07-06 20:02:39.061503 | orchestrator | Sunday 06 July 2025 20:02:38 +0000 (0:00:04.834) 0:00:07.213 ***********
2025-07-06 20:02:39.061514 | orchestrator | skipping: [testbed-manager]
2025-07-06 20:02:39.061525 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:02:39.061536 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:02:39.061547 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:02:39.061557 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:02:39.061568 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:02:39.061579 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:02:39.061589 | orchestrator |
2025-07-06 20:02:39.061600 | orchestrator | PLAY RECAP *********************************************************************
2025-07-06 20:02:39.061612 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-06 20:02:39.061624 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-06 20:02:39.061635 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-06 20:02:39.061646 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-06 20:02:39.061673 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-06 20:02:39.061684 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-06 20:02:39.061695 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-06 20:02:39.061706 | orchestrator |
2025-07-06 20:02:39.061717 | orchestrator |
2025-07-06 20:02:39.061727 | orchestrator | TASKS RECAP ********************************************************************
2025-07-06 20:02:39.061738 | orchestrator | Sunday 06 July 2025 20:02:38 +0000 (0:00:00.578) 0:00:07.791 ***********
2025-07-06 20:02:39.061749 | orchestrator | ===============================================================================
2025-07-06 20:02:39.061760 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.83s
2025-07-06 20:02:39.061781 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.10s
2025-07-06 20:02:39.061792 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.01s
2025-07-06 20:02:39.061803 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.58s
2025-07-06 20:02:41.379031 | orchestrator | 2025-07-06 20:02:41 | INFO  | Task abc2ebcd-9369-49f0-8315-fe3c427ce2f5 (ceph-configure-lvm-volumes) was prepared for execution.
2025-07-06 20:02:41.379128 | orchestrator | 2025-07-06 20:02:41 | INFO  | It takes a moment until task abc2ebcd-9369-49f0-8315-fe3c427ce2f5 (ceph-configure-lvm-volumes) has been started and output is visible here.
2025-07-06 20:02:54.429652 | orchestrator |
2025-07-06 20:02:54.429753 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-07-06 20:02:54.429767 | orchestrator |
2025-07-06 20:02:54.429778 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-07-06 20:02:54.429792 | orchestrator | Sunday 06 July 2025 20:02:45 +0000 (0:00:00.342) 0:00:00.342 ***********
2025-07-06 20:02:54.429803 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-07-06 20:02:54.429815 | orchestrator |
2025-07-06 20:02:54.429829 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-07-06 20:02:54.429840 | orchestrator | Sunday 06 July 2025 20:02:46 +0000 (0:00:00.326) 0:00:00.669 ***********
2025-07-06 20:02:54.429851 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:02:54.429863 | orchestrator |
2025-07-06 20:02:54.429875 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-06 20:02:54.429886 | orchestrator | Sunday 06 July 2025 20:02:46 +0000 (0:00:00.250) 0:00:00.919 ***********
2025-07-06 20:02:54.429897 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-07-06 20:02:54.429908 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-07-06 20:02:54.429919 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-07-06 20:02:54.429930 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-07-06 20:02:54.429940 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-07-06 20:02:54.429951 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-07-06 20:02:54.429962 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-07-06 20:02:54.429973 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-07-06 20:02:54.429984 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-07-06 20:02:54.429994 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-07-06 20:02:54.430005 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-07-06 20:02:54.430085 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-07-06 20:02:54.430097 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-07-06 20:02:54.430108 | orchestrator |
2025-07-06 20:02:54.430120 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-06 20:02:54.430131 | orchestrator | Sunday 06 July 2025 20:02:46 +0000 (0:00:00.417) 0:00:01.336 ***********
2025-07-06 20:02:54.430142 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:02:54.430153 | orchestrator |
2025-07-06 20:02:54.430271 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-06 20:02:54.430296 | orchestrator | Sunday 06 July 2025 20:02:47 +0000 (0:00:00.570) 0:00:01.907 ***********
2025-07-06 20:02:54.430309 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:02:54.430322 | orchestrator |
2025-07-06 20:02:54.430358 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-06 20:02:54.430372 | orchestrator | Sunday 06 July 2025 20:02:47 +0000 (0:00:00.208) 0:00:02.116 ***********
2025-07-06 20:02:54.430385 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:02:54.430397 | orchestrator |
2025-07-06 20:02:54.430410 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-06 20:02:54.430422 | orchestrator | Sunday 06 July 2025 20:02:47 +0000 (0:00:00.216) 0:00:02.332 ***********
2025-07-06 20:02:54.430435 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:02:54.430447 | orchestrator |
2025-07-06 20:02:54.430459 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-06 20:02:54.430472 | orchestrator | Sunday 06 July 2025 20:02:48 +0000 (0:00:00.229) 0:00:02.562 ***********
2025-07-06 20:02:54.430484 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:02:54.430497 | orchestrator |
2025-07-06 20:02:54.430510 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-06 20:02:54.430523 | orchestrator | Sunday 06 July 2025 20:02:48 +0000 (0:00:00.200) 0:00:02.762 ***********
2025-07-06 20:02:54.430533 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:02:54.430544 | orchestrator |
2025-07-06 20:02:54.430555 | orchestrator | TASK [Add known links to the list of available block devices]
****************** 2025-07-06 20:02:54.430566 | orchestrator | Sunday 06 July 2025 20:02:48 +0000 (0:00:00.212) 0:00:02.975 *********** 2025-07-06 20:02:54.430576 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:02:54.430587 | orchestrator | 2025-07-06 20:02:54.430598 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:02:54.430609 | orchestrator | Sunday 06 July 2025 20:02:48 +0000 (0:00:00.207) 0:00:03.182 *********** 2025-07-06 20:02:54.430620 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:02:54.430630 | orchestrator | 2025-07-06 20:02:54.430641 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:02:54.430652 | orchestrator | Sunday 06 July 2025 20:02:49 +0000 (0:00:00.195) 0:00:03.378 *********** 2025-07-06 20:02:54.430663 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8571afc0-e036-46a5-988a-49c98e90c838) 2025-07-06 20:02:54.430675 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8571afc0-e036-46a5-988a-49c98e90c838) 2025-07-06 20:02:54.430686 | orchestrator | 2025-07-06 20:02:54.430697 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:02:54.430707 | orchestrator | Sunday 06 July 2025 20:02:49 +0000 (0:00:00.497) 0:00:03.876 *********** 2025-07-06 20:02:54.430739 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3c29cd91-58e9-42ce-8653-990321e9d76b) 2025-07-06 20:02:54.430750 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3c29cd91-58e9-42ce-8653-990321e9d76b) 2025-07-06 20:02:54.430761 | orchestrator | 2025-07-06 20:02:54.430772 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:02:54.430783 | orchestrator | Sunday 06 July 2025 20:02:50 +0000 (0:00:00.545) 0:00:04.422 *********** 2025-07-06 20:02:54.430794 | 
orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_fd99b70f-8aa3-4e15-8e66-07a34fe10111) 2025-07-06 20:02:54.430805 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_fd99b70f-8aa3-4e15-8e66-07a34fe10111) 2025-07-06 20:02:54.430815 | orchestrator | 2025-07-06 20:02:54.430826 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:02:54.430837 | orchestrator | Sunday 06 July 2025 20:02:50 +0000 (0:00:00.642) 0:00:05.064 *********** 2025-07-06 20:02:54.430847 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0c9a7d91-c8fc-48f8-acad-853231e255dd) 2025-07-06 20:02:54.430858 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0c9a7d91-c8fc-48f8-acad-853231e255dd) 2025-07-06 20:02:54.430868 | orchestrator | 2025-07-06 20:02:54.430879 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:02:54.430890 | orchestrator | Sunday 06 July 2025 20:02:51 +0000 (0:00:00.694) 0:00:05.758 *********** 2025-07-06 20:02:54.430909 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-07-06 20:02:54.430919 | orchestrator | 2025-07-06 20:02:54.430930 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:02:54.430941 | orchestrator | Sunday 06 July 2025 20:02:52 +0000 (0:00:00.896) 0:00:06.655 *********** 2025-07-06 20:02:54.430951 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-07-06 20:02:54.430962 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-07-06 20:02:54.430972 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-07-06 20:02:54.430983 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => 
(item=loop3) 2025-07-06 20:02:54.430994 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-07-06 20:02:54.431004 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-07-06 20:02:54.431015 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-07-06 20:02:54.431026 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-07-06 20:02:54.431036 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-07-06 20:02:54.431047 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-07-06 20:02:54.431057 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-07-06 20:02:54.431068 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-07-06 20:02:54.431084 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-07-06 20:02:54.431095 | orchestrator | 2025-07-06 20:02:54.431106 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:02:54.431117 | orchestrator | Sunday 06 July 2025 20:02:52 +0000 (0:00:00.437) 0:00:07.093 *********** 2025-07-06 20:02:54.431142 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:02:54.431177 | orchestrator | 2025-07-06 20:02:54.431194 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:02:54.431205 | orchestrator | Sunday 06 July 2025 20:02:52 +0000 (0:00:00.217) 0:00:07.310 *********** 2025-07-06 20:02:54.431216 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:02:54.431226 | orchestrator | 2025-07-06 20:02:54.431237 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-07-06 20:02:54.431247 | orchestrator | Sunday 06 July 2025 20:02:53 +0000 (0:00:00.223) 0:00:07.534 *********** 2025-07-06 20:02:54.431258 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:02:54.431269 | orchestrator | 2025-07-06 20:02:54.431279 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:02:54.431290 | orchestrator | Sunday 06 July 2025 20:02:53 +0000 (0:00:00.232) 0:00:07.767 *********** 2025-07-06 20:02:54.431301 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:02:54.431311 | orchestrator | 2025-07-06 20:02:54.431322 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:02:54.431332 | orchestrator | Sunday 06 July 2025 20:02:53 +0000 (0:00:00.199) 0:00:07.966 *********** 2025-07-06 20:02:54.431343 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:02:54.431354 | orchestrator | 2025-07-06 20:02:54.431365 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:02:54.431375 | orchestrator | Sunday 06 July 2025 20:02:53 +0000 (0:00:00.187) 0:00:08.154 *********** 2025-07-06 20:02:54.431386 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:02:54.431396 | orchestrator | 2025-07-06 20:02:54.431407 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:02:54.431425 | orchestrator | Sunday 06 July 2025 20:02:53 +0000 (0:00:00.197) 0:00:08.351 *********** 2025-07-06 20:02:54.431436 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:02:54.431447 | orchestrator | 2025-07-06 20:02:54.431458 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:02:54.431468 | orchestrator | Sunday 06 July 2025 20:02:54 +0000 (0:00:00.198) 0:00:08.550 *********** 2025-07-06 20:02:54.431489 | 
orchestrator | skipping: [testbed-node-3] 2025-07-06 20:03:02.601088 | orchestrator | 2025-07-06 20:03:02.601238 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:03:02.601258 | orchestrator | Sunday 06 July 2025 20:02:54 +0000 (0:00:00.234) 0:00:08.785 *********** 2025-07-06 20:03:02.601270 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-07-06 20:03:02.601283 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-07-06 20:03:02.601295 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-07-06 20:03:02.601305 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-07-06 20:03:02.601317 | orchestrator | 2025-07-06 20:03:02.601328 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:03:02.601339 | orchestrator | Sunday 06 July 2025 20:02:55 +0000 (0:00:01.179) 0:00:09.964 *********** 2025-07-06 20:03:02.601351 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:03:02.601362 | orchestrator | 2025-07-06 20:03:02.601373 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:03:02.601384 | orchestrator | Sunday 06 July 2025 20:02:55 +0000 (0:00:00.215) 0:00:10.179 *********** 2025-07-06 20:03:02.601394 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:03:02.601405 | orchestrator | 2025-07-06 20:03:02.601416 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:03:02.601427 | orchestrator | Sunday 06 July 2025 20:02:56 +0000 (0:00:00.216) 0:00:10.396 *********** 2025-07-06 20:03:02.601437 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:03:02.601448 | orchestrator | 2025-07-06 20:03:02.601477 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:03:02.601489 | orchestrator | Sunday 06 July 2025 20:02:56 +0000 (0:00:00.237) 0:00:10.633 
*********** 2025-07-06 20:03:02.601500 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:03:02.601510 | orchestrator | 2025-07-06 20:03:02.601521 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-07-06 20:03:02.601532 | orchestrator | Sunday 06 July 2025 20:02:56 +0000 (0:00:00.222) 0:00:10.856 *********** 2025-07-06 20:03:02.601543 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-07-06 20:03:02.601554 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-07-06 20:03:02.601565 | orchestrator | 2025-07-06 20:03:02.601576 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-07-06 20:03:02.601587 | orchestrator | Sunday 06 July 2025 20:02:56 +0000 (0:00:00.174) 0:00:11.030 *********** 2025-07-06 20:03:02.601598 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:03:02.601609 | orchestrator | 2025-07-06 20:03:02.601623 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-07-06 20:03:02.601636 | orchestrator | Sunday 06 July 2025 20:02:56 +0000 (0:00:00.160) 0:00:11.191 *********** 2025-07-06 20:03:02.601650 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:03:02.601663 | orchestrator | 2025-07-06 20:03:02.601676 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-07-06 20:03:02.601689 | orchestrator | Sunday 06 July 2025 20:02:56 +0000 (0:00:00.156) 0:00:11.347 *********** 2025-07-06 20:03:02.601702 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:03:02.601715 | orchestrator | 2025-07-06 20:03:02.601728 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-07-06 20:03:02.601741 | orchestrator | Sunday 06 July 2025 20:02:57 +0000 (0:00:00.150) 0:00:11.497 *********** 2025-07-06 20:03:02.601754 | orchestrator | ok: 
[testbed-node-3] 2025-07-06 20:03:02.601767 | orchestrator | 2025-07-06 20:03:02.601780 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-07-06 20:03:02.601820 | orchestrator | Sunday 06 July 2025 20:02:57 +0000 (0:00:00.156) 0:00:11.653 *********** 2025-07-06 20:03:02.601835 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09'}}) 2025-07-06 20:03:02.601849 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1256d0fb-e60f-50ff-afd8-4edc5f2c0a15'}}) 2025-07-06 20:03:02.601862 | orchestrator | 2025-07-06 20:03:02.601875 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-07-06 20:03:02.601888 | orchestrator | Sunday 06 July 2025 20:02:57 +0000 (0:00:00.198) 0:00:11.851 *********** 2025-07-06 20:03:02.601901 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09'}})  2025-07-06 20:03:02.601922 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1256d0fb-e60f-50ff-afd8-4edc5f2c0a15'}})  2025-07-06 20:03:02.601937 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:03:02.601951 | orchestrator | 2025-07-06 20:03:02.601964 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-07-06 20:03:02.601975 | orchestrator | Sunday 06 July 2025 20:02:57 +0000 (0:00:00.179) 0:00:12.031 *********** 2025-07-06 20:03:02.601986 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09'}})  2025-07-06 20:03:02.601997 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1256d0fb-e60f-50ff-afd8-4edc5f2c0a15'}})  2025-07-06 20:03:02.602008 | orchestrator | skipping: [testbed-node-3] 2025-07-06 
20:03:02.602118 | orchestrator | 2025-07-06 20:03:02.602157 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-07-06 20:03:02.602169 | orchestrator | Sunday 06 July 2025 20:02:57 +0000 (0:00:00.172) 0:00:12.204 *********** 2025-07-06 20:03:02.602180 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09'}})  2025-07-06 20:03:02.602192 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1256d0fb-e60f-50ff-afd8-4edc5f2c0a15'}})  2025-07-06 20:03:02.602203 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:03:02.602213 | orchestrator | 2025-07-06 20:03:02.602244 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-07-06 20:03:02.602255 | orchestrator | Sunday 06 July 2025 20:02:58 +0000 (0:00:00.396) 0:00:12.601 *********** 2025-07-06 20:03:02.602266 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:03:02.602277 | orchestrator | 2025-07-06 20:03:02.602288 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-07-06 20:03:02.602298 | orchestrator | Sunday 06 July 2025 20:02:58 +0000 (0:00:00.206) 0:00:12.808 *********** 2025-07-06 20:03:02.602309 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:03:02.602320 | orchestrator | 2025-07-06 20:03:02.602331 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-07-06 20:03:02.602341 | orchestrator | Sunday 06 July 2025 20:02:58 +0000 (0:00:00.159) 0:00:12.967 *********** 2025-07-06 20:03:02.602352 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:03:02.602363 | orchestrator | 2025-07-06 20:03:02.602373 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-07-06 20:03:02.602384 | orchestrator | Sunday 06 July 2025 20:02:58 +0000 (0:00:00.143) 
0:00:13.110 *********** 2025-07-06 20:03:02.602395 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:03:02.602405 | orchestrator | 2025-07-06 20:03:02.602416 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-07-06 20:03:02.602427 | orchestrator | Sunday 06 July 2025 20:02:58 +0000 (0:00:00.175) 0:00:13.286 *********** 2025-07-06 20:03:02.602438 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:03:02.602448 | orchestrator | 2025-07-06 20:03:02.602459 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-07-06 20:03:02.602481 | orchestrator | Sunday 06 July 2025 20:02:59 +0000 (0:00:00.159) 0:00:13.445 *********** 2025-07-06 20:03:02.602492 | orchestrator | ok: [testbed-node-3] => { 2025-07-06 20:03:02.602502 | orchestrator |  "ceph_osd_devices": { 2025-07-06 20:03:02.602513 | orchestrator |  "sdb": { 2025-07-06 20:03:02.602524 | orchestrator |  "osd_lvm_uuid": "22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09" 2025-07-06 20:03:02.602535 | orchestrator |  }, 2025-07-06 20:03:02.602546 | orchestrator |  "sdc": { 2025-07-06 20:03:02.602557 | orchestrator |  "osd_lvm_uuid": "1256d0fb-e60f-50ff-afd8-4edc5f2c0a15" 2025-07-06 20:03:02.602567 | orchestrator |  } 2025-07-06 20:03:02.602578 | orchestrator |  } 2025-07-06 20:03:02.602589 | orchestrator | } 2025-07-06 20:03:02.602600 | orchestrator | 2025-07-06 20:03:02.602611 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-07-06 20:03:02.602621 | orchestrator | Sunday 06 July 2025 20:02:59 +0000 (0:00:00.151) 0:00:13.597 *********** 2025-07-06 20:03:02.602632 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:03:02.602643 | orchestrator | 2025-07-06 20:03:02.602653 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-07-06 20:03:02.602664 | orchestrator | Sunday 06 July 2025 20:02:59 +0000 (0:00:00.152) 
0:00:13.750 *********** 2025-07-06 20:03:02.602675 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:03:02.602685 | orchestrator | 2025-07-06 20:03:02.602696 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-07-06 20:03:02.602707 | orchestrator | Sunday 06 July 2025 20:02:59 +0000 (0:00:00.151) 0:00:13.902 *********** 2025-07-06 20:03:02.602717 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:03:02.602728 | orchestrator | 2025-07-06 20:03:02.602738 | orchestrator | TASK [Print configuration data] ************************************************ 2025-07-06 20:03:02.602749 | orchestrator | Sunday 06 July 2025 20:02:59 +0000 (0:00:00.131) 0:00:14.033 *********** 2025-07-06 20:03:02.602759 | orchestrator | changed: [testbed-node-3] => { 2025-07-06 20:03:02.602770 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-07-06 20:03:02.602781 | orchestrator |  "ceph_osd_devices": { 2025-07-06 20:03:02.602791 | orchestrator |  "sdb": { 2025-07-06 20:03:02.602802 | orchestrator |  "osd_lvm_uuid": "22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09" 2025-07-06 20:03:02.602813 | orchestrator |  }, 2025-07-06 20:03:02.602824 | orchestrator |  "sdc": { 2025-07-06 20:03:02.602834 | orchestrator |  "osd_lvm_uuid": "1256d0fb-e60f-50ff-afd8-4edc5f2c0a15" 2025-07-06 20:03:02.602845 | orchestrator |  } 2025-07-06 20:03:02.602860 | orchestrator |  }, 2025-07-06 20:03:02.602871 | orchestrator |  "lvm_volumes": [ 2025-07-06 20:03:02.602882 | orchestrator |  { 2025-07-06 20:03:02.602893 | orchestrator |  "data": "osd-block-22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09", 2025-07-06 20:03:02.602904 | orchestrator |  "data_vg": "ceph-22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09" 2025-07-06 20:03:02.602914 | orchestrator |  }, 2025-07-06 20:03:02.602925 | orchestrator |  { 2025-07-06 20:03:02.602936 | orchestrator |  "data": "osd-block-1256d0fb-e60f-50ff-afd8-4edc5f2c0a15", 2025-07-06 20:03:02.602947 | orchestrator |  "data_vg": 
"ceph-1256d0fb-e60f-50ff-afd8-4edc5f2c0a15" 2025-07-06 20:03:02.602957 | orchestrator |  } 2025-07-06 20:03:02.602968 | orchestrator |  ] 2025-07-06 20:03:02.602979 | orchestrator |  } 2025-07-06 20:03:02.603003 | orchestrator | } 2025-07-06 20:03:02.603015 | orchestrator | 2025-07-06 20:03:02.603026 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-07-06 20:03:02.603037 | orchestrator | Sunday 06 July 2025 20:02:59 +0000 (0:00:00.195) 0:00:14.229 *********** 2025-07-06 20:03:02.603047 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-07-06 20:03:02.603058 | orchestrator | 2025-07-06 20:03:02.603069 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-07-06 20:03:02.603080 | orchestrator | 2025-07-06 20:03:02.603097 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-07-06 20:03:02.603108 | orchestrator | Sunday 06 July 2025 20:03:02 +0000 (0:00:02.261) 0:00:16.490 *********** 2025-07-06 20:03:02.603118 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-07-06 20:03:02.603129 | orchestrator | 2025-07-06 20:03:02.603161 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-07-06 20:03:02.603172 | orchestrator | Sunday 06 July 2025 20:03:02 +0000 (0:00:00.244) 0:00:16.735 *********** 2025-07-06 20:03:02.603183 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:03:02.603193 | orchestrator | 2025-07-06 20:03:02.603204 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:03:02.603223 | orchestrator | Sunday 06 July 2025 20:03:02 +0000 (0:00:00.218) 0:00:16.954 *********** 2025-07-06 20:03:10.857791 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-07-06 20:03:10.857902 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-07-06 20:03:10.857918 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-07-06 20:03:10.857930 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-07-06 20:03:10.857941 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-07-06 20:03:10.857952 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-07-06 20:03:10.857963 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-07-06 20:03:10.857973 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-07-06 20:03:10.858000 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-07-06 20:03:10.858011 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-07-06 20:03:10.858098 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-07-06 20:03:10.858110 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-07-06 20:03:10.858177 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-07-06 20:03:10.858189 | orchestrator | 2025-07-06 20:03:10.858201 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:03:10.858213 | orchestrator | Sunday 06 July 2025 20:03:02 +0000 (0:00:00.381) 0:00:17.336 *********** 2025-07-06 20:03:10.858236 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:03:10.858248 | orchestrator | 2025-07-06 20:03:10.858260 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 
20:03:10.858271 | orchestrator | Sunday 06 July 2025 20:03:03 +0000 (0:00:00.230) 0:00:17.566 *********** 2025-07-06 20:03:10.858282 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:03:10.858293 | orchestrator | 2025-07-06 20:03:10.858304 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:03:10.858317 | orchestrator | Sunday 06 July 2025 20:03:03 +0000 (0:00:00.201) 0:00:17.767 *********** 2025-07-06 20:03:10.858331 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:03:10.858344 | orchestrator | 2025-07-06 20:03:10.858357 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:03:10.858370 | orchestrator | Sunday 06 July 2025 20:03:03 +0000 (0:00:00.200) 0:00:17.968 *********** 2025-07-06 20:03:10.858382 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:03:10.858395 | orchestrator | 2025-07-06 20:03:10.858407 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:03:10.858419 | orchestrator | Sunday 06 July 2025 20:03:03 +0000 (0:00:00.227) 0:00:18.196 *********** 2025-07-06 20:03:10.858432 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:03:10.858445 | orchestrator | 2025-07-06 20:03:10.858457 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:03:10.858498 | orchestrator | Sunday 06 July 2025 20:03:04 +0000 (0:00:00.237) 0:00:18.434 *********** 2025-07-06 20:03:10.858511 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:03:10.858523 | orchestrator | 2025-07-06 20:03:10.858535 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:03:10.858548 | orchestrator | Sunday 06 July 2025 20:03:04 +0000 (0:00:00.710) 0:00:19.145 *********** 2025-07-06 20:03:10.858560 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:03:10.858573 | 
orchestrator | 2025-07-06 20:03:10.858585 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:03:10.858598 | orchestrator | Sunday 06 July 2025 20:03:04 +0000 (0:00:00.215) 0:00:19.361 *********** 2025-07-06 20:03:10.858628 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:03:10.858642 | orchestrator | 2025-07-06 20:03:10.858654 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:03:10.858667 | orchestrator | Sunday 06 July 2025 20:03:05 +0000 (0:00:00.226) 0:00:19.587 *********** 2025-07-06 20:03:10.858679 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c4ee01e1-6308-4ea1-8b72-f8c7aa0af3e8) 2025-07-06 20:03:10.858693 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c4ee01e1-6308-4ea1-8b72-f8c7aa0af3e8) 2025-07-06 20:03:10.858703 | orchestrator | 2025-07-06 20:03:10.858714 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:03:10.858725 | orchestrator | Sunday 06 July 2025 20:03:05 +0000 (0:00:00.441) 0:00:20.029 *********** 2025-07-06 20:03:10.858736 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c523d18d-f688-4547-bb4c-d63e44be8719) 2025-07-06 20:03:10.858746 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c523d18d-f688-4547-bb4c-d63e44be8719) 2025-07-06 20:03:10.858757 | orchestrator | 2025-07-06 20:03:10.858769 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:03:10.858779 | orchestrator | Sunday 06 July 2025 20:03:06 +0000 (0:00:00.469) 0:00:20.498 *********** 2025-07-06 20:03:10.858790 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e42fce45-67a3-477c-881f-6db38785a929) 2025-07-06 20:03:10.858801 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_e42fce45-67a3-477c-881f-6db38785a929) 2025-07-06 20:03:10.858811 | orchestrator | 2025-07-06 20:03:10.858822 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:03:10.858833 | orchestrator | Sunday 06 July 2025 20:03:06 +0000 (0:00:00.425) 0:00:20.924 *********** 2025-07-06 20:03:10.858864 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_28d32b1f-54bf-4890-9371-a2140c9d3e48) 2025-07-06 20:03:10.858876 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_28d32b1f-54bf-4890-9371-a2140c9d3e48) 2025-07-06 20:03:10.858886 | orchestrator | 2025-07-06 20:03:10.858897 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:03:10.858908 | orchestrator | Sunday 06 July 2025 20:03:07 +0000 (0:00:00.471) 0:00:21.395 *********** 2025-07-06 20:03:10.858919 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-07-06 20:03:10.858930 | orchestrator | 2025-07-06 20:03:10.858940 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:03:10.858951 | orchestrator | Sunday 06 July 2025 20:03:07 +0000 (0:00:00.356) 0:00:21.752 *********** 2025-07-06 20:03:10.858962 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-07-06 20:03:10.858973 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-07-06 20:03:10.858983 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-07-06 20:03:10.858994 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-07-06 20:03:10.859004 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-07-06 20:03:10.859025 | orchestrator | 
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-07-06 20:03:10.859035 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-07-06 20:03:10.859046 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-07-06 20:03:10.859056 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-07-06 20:03:10.859067 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-07-06 20:03:10.859078 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-07-06 20:03:10.859089 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-07-06 20:03:10.859099 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-07-06 20:03:10.859110 | orchestrator | 2025-07-06 20:03:10.859144 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:03:10.859155 | orchestrator | Sunday 06 July 2025 20:03:07 +0000 (0:00:00.405) 0:00:22.158 *********** 2025-07-06 20:03:10.859166 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:03:10.859177 | orchestrator | 2025-07-06 20:03:10.859188 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:03:10.859198 | orchestrator | Sunday 06 July 2025 20:03:08 +0000 (0:00:00.220) 0:00:22.378 *********** 2025-07-06 20:03:10.859209 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:03:10.859220 | orchestrator | 2025-07-06 20:03:10.859230 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:03:10.859241 | orchestrator | Sunday 06 July 2025 20:03:08 +0000 (0:00:00.635) 0:00:23.014 *********** 
2025-07-06 20:03:10.859252 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:03:10.859262 | orchestrator | 2025-07-06 20:03:10.859273 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:03:10.859284 | orchestrator | Sunday 06 July 2025 20:03:08 +0000 (0:00:00.211) 0:00:23.226 *********** 2025-07-06 20:03:10.859294 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:03:10.859305 | orchestrator | 2025-07-06 20:03:10.859316 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:03:10.859326 | orchestrator | Sunday 06 July 2025 20:03:09 +0000 (0:00:00.214) 0:00:23.441 *********** 2025-07-06 20:03:10.859337 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:03:10.859348 | orchestrator | 2025-07-06 20:03:10.859365 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:03:10.859376 | orchestrator | Sunday 06 July 2025 20:03:09 +0000 (0:00:00.223) 0:00:23.665 *********** 2025-07-06 20:03:10.859387 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:03:10.859397 | orchestrator | 2025-07-06 20:03:10.859408 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:03:10.859419 | orchestrator | Sunday 06 July 2025 20:03:09 +0000 (0:00:00.222) 0:00:23.887 *********** 2025-07-06 20:03:10.859429 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:03:10.859440 | orchestrator | 2025-07-06 20:03:10.859451 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:03:10.859461 | orchestrator | Sunday 06 July 2025 20:03:09 +0000 (0:00:00.199) 0:00:24.087 *********** 2025-07-06 20:03:10.859472 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:03:10.859483 | orchestrator | 2025-07-06 20:03:10.859494 | orchestrator | TASK [Add known partitions to the list of available 
block devices] ************* 2025-07-06 20:03:10.859504 | orchestrator | Sunday 06 July 2025 20:03:09 +0000 (0:00:00.213) 0:00:24.301 *********** 2025-07-06 20:03:10.859515 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-07-06 20:03:10.859527 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-07-06 20:03:10.859538 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-07-06 20:03:10.859556 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-07-06 20:03:10.859567 | orchestrator | 2025-07-06 20:03:10.859578 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:03:10.859589 | orchestrator | Sunday 06 July 2025 20:03:10 +0000 (0:00:00.697) 0:00:24.999 *********** 2025-07-06 20:03:10.859599 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:03:10.859610 | orchestrator | 2025-07-06 20:03:10.859628 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:03:17.584671 | orchestrator | Sunday 06 July 2025 20:03:10 +0000 (0:00:00.216) 0:00:25.216 *********** 2025-07-06 20:03:17.584784 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:03:17.584802 | orchestrator | 2025-07-06 20:03:17.584815 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:03:17.584826 | orchestrator | Sunday 06 July 2025 20:03:11 +0000 (0:00:00.190) 0:00:25.406 *********** 2025-07-06 20:03:17.584837 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:03:17.584848 | orchestrator | 2025-07-06 20:03:17.584859 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:03:17.584870 | orchestrator | Sunday 06 July 2025 20:03:11 +0000 (0:00:00.214) 0:00:25.621 *********** 2025-07-06 20:03:17.584881 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:03:17.584893 | orchestrator | 2025-07-06 20:03:17.584904 | orchestrator | TASK [Set 
UUIDs for OSD VGs/LVs] *********************************************** 2025-07-06 20:03:17.584930 | orchestrator | Sunday 06 July 2025 20:03:11 +0000 (0:00:00.237) 0:00:25.859 *********** 2025-07-06 20:03:17.584941 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-07-06 20:03:17.584952 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-07-06 20:03:17.584963 | orchestrator | 2025-07-06 20:03:17.584974 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-07-06 20:03:17.584984 | orchestrator | Sunday 06 July 2025 20:03:11 +0000 (0:00:00.378) 0:00:26.238 *********** 2025-07-06 20:03:17.584995 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:03:17.585006 | orchestrator | 2025-07-06 20:03:17.585017 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-07-06 20:03:17.585028 | orchestrator | Sunday 06 July 2025 20:03:12 +0000 (0:00:00.156) 0:00:26.394 *********** 2025-07-06 20:03:17.585039 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:03:17.585050 | orchestrator | 2025-07-06 20:03:17.585071 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-07-06 20:03:17.585083 | orchestrator | Sunday 06 July 2025 20:03:12 +0000 (0:00:00.161) 0:00:26.556 *********** 2025-07-06 20:03:17.585094 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:03:17.585127 | orchestrator | 2025-07-06 20:03:17.585138 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-07-06 20:03:17.585149 | orchestrator | Sunday 06 July 2025 20:03:12 +0000 (0:00:00.152) 0:00:26.709 *********** 2025-07-06 20:03:17.585160 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:03:17.585172 | orchestrator | 2025-07-06 20:03:17.585183 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-07-06 
20:03:17.585194 | orchestrator | Sunday 06 July 2025 20:03:12 +0000 (0:00:00.154) 0:00:26.864 *********** 2025-07-06 20:03:17.585207 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '31ad454b-c5b7-54ad-acab-5839a456146b'}}) 2025-07-06 20:03:17.585220 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2eb0e424-9f58-550c-b8cf-76c1b52e517a'}}) 2025-07-06 20:03:17.585233 | orchestrator | 2025-07-06 20:03:17.585246 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-07-06 20:03:17.585259 | orchestrator | Sunday 06 July 2025 20:03:12 +0000 (0:00:00.182) 0:00:27.046 *********** 2025-07-06 20:03:17.585272 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '31ad454b-c5b7-54ad-acab-5839a456146b'}})  2025-07-06 20:03:17.585286 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2eb0e424-9f58-550c-b8cf-76c1b52e517a'}})  2025-07-06 20:03:17.585327 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:03:17.585340 | orchestrator | 2025-07-06 20:03:17.585353 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-07-06 20:03:17.585366 | orchestrator | Sunday 06 July 2025 20:03:12 +0000 (0:00:00.169) 0:00:27.216 *********** 2025-07-06 20:03:17.585379 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '31ad454b-c5b7-54ad-acab-5839a456146b'}})  2025-07-06 20:03:17.585391 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2eb0e424-9f58-550c-b8cf-76c1b52e517a'}})  2025-07-06 20:03:17.585404 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:03:17.585416 | orchestrator | 2025-07-06 20:03:17.585429 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-07-06 20:03:17.585442 | 
orchestrator | Sunday 06 July 2025 20:03:13 +0000 (0:00:00.179) 0:00:27.395 *********** 2025-07-06 20:03:17.585455 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '31ad454b-c5b7-54ad-acab-5839a456146b'}})  2025-07-06 20:03:17.585468 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2eb0e424-9f58-550c-b8cf-76c1b52e517a'}})  2025-07-06 20:03:17.585481 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:03:17.585493 | orchestrator | 2025-07-06 20:03:17.585506 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-07-06 20:03:17.585539 | orchestrator | Sunday 06 July 2025 20:03:13 +0000 (0:00:00.186) 0:00:27.582 *********** 2025-07-06 20:03:17.585553 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:03:17.585565 | orchestrator | 2025-07-06 20:03:17.585579 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-07-06 20:03:17.585590 | orchestrator | Sunday 06 July 2025 20:03:13 +0000 (0:00:00.130) 0:00:27.712 *********** 2025-07-06 20:03:17.585601 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:03:17.585612 | orchestrator | 2025-07-06 20:03:17.585622 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-07-06 20:03:17.585633 | orchestrator | Sunday 06 July 2025 20:03:13 +0000 (0:00:00.184) 0:00:27.897 *********** 2025-07-06 20:03:17.585644 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:03:17.585655 | orchestrator | 2025-07-06 20:03:17.585684 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-07-06 20:03:17.585695 | orchestrator | Sunday 06 July 2025 20:03:13 +0000 (0:00:00.170) 0:00:28.067 *********** 2025-07-06 20:03:17.585706 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:03:17.585717 | orchestrator | 2025-07-06 20:03:17.585727 | orchestrator | TASK 
[Set DB+WAL devices config data] ****************************************** 2025-07-06 20:03:17.585742 | orchestrator | Sunday 06 July 2025 20:03:14 +0000 (0:00:00.359) 0:00:28.426 *********** 2025-07-06 20:03:17.585760 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:03:17.585778 | orchestrator | 2025-07-06 20:03:17.585805 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-07-06 20:03:17.585825 | orchestrator | Sunday 06 July 2025 20:03:14 +0000 (0:00:00.145) 0:00:28.572 *********** 2025-07-06 20:03:17.585843 | orchestrator | ok: [testbed-node-4] => { 2025-07-06 20:03:17.585861 | orchestrator |  "ceph_osd_devices": { 2025-07-06 20:03:17.585880 | orchestrator |  "sdb": { 2025-07-06 20:03:17.585897 | orchestrator |  "osd_lvm_uuid": "31ad454b-c5b7-54ad-acab-5839a456146b" 2025-07-06 20:03:17.585915 | orchestrator |  }, 2025-07-06 20:03:17.585927 | orchestrator |  "sdc": { 2025-07-06 20:03:17.585938 | orchestrator |  "osd_lvm_uuid": "2eb0e424-9f58-550c-b8cf-76c1b52e517a" 2025-07-06 20:03:17.585948 | orchestrator |  } 2025-07-06 20:03:17.585959 | orchestrator |  } 2025-07-06 20:03:17.585969 | orchestrator | } 2025-07-06 20:03:17.585980 | orchestrator | 2025-07-06 20:03:17.585991 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-07-06 20:03:17.586001 | orchestrator | Sunday 06 July 2025 20:03:14 +0000 (0:00:00.158) 0:00:28.730 *********** 2025-07-06 20:03:17.586116 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:03:17.586132 | orchestrator | 2025-07-06 20:03:17.586143 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-07-06 20:03:17.586154 | orchestrator | Sunday 06 July 2025 20:03:14 +0000 (0:00:00.138) 0:00:28.869 *********** 2025-07-06 20:03:17.586165 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:03:17.586175 | orchestrator | 2025-07-06 20:03:17.586186 | orchestrator | TASK [Print 
shared DB/WAL devices] ********************************************* 2025-07-06 20:03:17.586196 | orchestrator | Sunday 06 July 2025 20:03:14 +0000 (0:00:00.132) 0:00:29.001 *********** 2025-07-06 20:03:17.586207 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:03:17.586217 | orchestrator | 2025-07-06 20:03:17.586228 | orchestrator | TASK [Print configuration data] ************************************************ 2025-07-06 20:03:17.586239 | orchestrator | Sunday 06 July 2025 20:03:14 +0000 (0:00:00.138) 0:00:29.140 *********** 2025-07-06 20:03:17.586249 | orchestrator | changed: [testbed-node-4] => { 2025-07-06 20:03:17.586259 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-07-06 20:03:17.586270 | orchestrator |  "ceph_osd_devices": { 2025-07-06 20:03:17.586280 | orchestrator |  "sdb": { 2025-07-06 20:03:17.586291 | orchestrator |  "osd_lvm_uuid": "31ad454b-c5b7-54ad-acab-5839a456146b" 2025-07-06 20:03:17.586302 | orchestrator |  }, 2025-07-06 20:03:17.586312 | orchestrator |  "sdc": { 2025-07-06 20:03:17.586323 | orchestrator |  "osd_lvm_uuid": "2eb0e424-9f58-550c-b8cf-76c1b52e517a" 2025-07-06 20:03:17.586333 | orchestrator |  } 2025-07-06 20:03:17.586344 | orchestrator |  }, 2025-07-06 20:03:17.586354 | orchestrator |  "lvm_volumes": [ 2025-07-06 20:03:17.586365 | orchestrator |  { 2025-07-06 20:03:17.586375 | orchestrator |  "data": "osd-block-31ad454b-c5b7-54ad-acab-5839a456146b", 2025-07-06 20:03:17.586386 | orchestrator |  "data_vg": "ceph-31ad454b-c5b7-54ad-acab-5839a456146b" 2025-07-06 20:03:17.586397 | orchestrator |  }, 2025-07-06 20:03:17.586407 | orchestrator |  { 2025-07-06 20:03:17.586417 | orchestrator |  "data": "osd-block-2eb0e424-9f58-550c-b8cf-76c1b52e517a", 2025-07-06 20:03:17.586428 | orchestrator |  "data_vg": "ceph-2eb0e424-9f58-550c-b8cf-76c1b52e517a" 2025-07-06 20:03:17.586438 | orchestrator |  } 2025-07-06 20:03:17.586449 | orchestrator |  ] 2025-07-06 20:03:17.586460 | orchestrator |  } 2025-07-06 20:03:17.586470 | 
orchestrator | } 2025-07-06 20:03:17.586481 | orchestrator | 2025-07-06 20:03:17.586491 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-07-06 20:03:17.586502 | orchestrator | Sunday 06 July 2025 20:03:14 +0000 (0:00:00.209) 0:00:29.349 *********** 2025-07-06 20:03:17.586513 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-07-06 20:03:17.586523 | orchestrator | 2025-07-06 20:03:17.586534 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-07-06 20:03:17.586544 | orchestrator | 2025-07-06 20:03:17.586555 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-07-06 20:03:17.586565 | orchestrator | Sunday 06 July 2025 20:03:16 +0000 (0:00:01.099) 0:00:30.449 *********** 2025-07-06 20:03:17.586576 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-07-06 20:03:17.586586 | orchestrator | 2025-07-06 20:03:17.586597 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-07-06 20:03:17.586607 | orchestrator | Sunday 06 July 2025 20:03:16 +0000 (0:00:00.482) 0:00:30.932 *********** 2025-07-06 20:03:17.586618 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:03:17.586628 | orchestrator | 2025-07-06 20:03:17.586639 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:03:17.586650 | orchestrator | Sunday 06 July 2025 20:03:17 +0000 (0:00:00.632) 0:00:31.564 *********** 2025-07-06 20:03:17.586660 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-07-06 20:03:17.586679 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-07-06 20:03:17.586690 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-07-06 
20:03:17.586701 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-07-06 20:03:17.586711 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-07-06 20:03:17.586722 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-07-06 20:03:17.586743 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-07-06 20:03:25.763524 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-07-06 20:03:25.763658 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-07-06 20:03:25.763673 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-07-06 20:03:25.763685 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-07-06 20:03:25.763697 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-07-06 20:03:25.763707 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-07-06 20:03:25.763719 | orchestrator | 2025-07-06 20:03:25.763732 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:03:25.763763 | orchestrator | Sunday 06 July 2025 20:03:17 +0000 (0:00:00.372) 0:00:31.936 *********** 2025-07-06 20:03:25.763775 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:03:25.763787 | orchestrator | 2025-07-06 20:03:25.763798 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:03:25.763810 | orchestrator | Sunday 06 July 2025 20:03:17 +0000 (0:00:00.184) 0:00:32.121 *********** 2025-07-06 20:03:25.763821 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:03:25.763832 | orchestrator | 
2025-07-06 20:03:25.763843 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:03:25.763854 | orchestrator | Sunday 06 July 2025 20:03:17 +0000 (0:00:00.200) 0:00:32.322 *********** 2025-07-06 20:03:25.763865 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:03:25.763876 | orchestrator | 2025-07-06 20:03:25.763888 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:03:25.763899 | orchestrator | Sunday 06 July 2025 20:03:18 +0000 (0:00:00.191) 0:00:32.513 *********** 2025-07-06 20:03:25.763910 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:03:25.763921 | orchestrator | 2025-07-06 20:03:25.763932 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:03:25.763943 | orchestrator | Sunday 06 July 2025 20:03:18 +0000 (0:00:00.196) 0:00:32.710 *********** 2025-07-06 20:03:25.763954 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:03:25.763965 | orchestrator | 2025-07-06 20:03:25.763976 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:03:25.763987 | orchestrator | Sunday 06 July 2025 20:03:18 +0000 (0:00:00.205) 0:00:32.916 *********** 2025-07-06 20:03:25.763998 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:03:25.764010 | orchestrator | 2025-07-06 20:03:25.764021 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:03:25.764032 | orchestrator | Sunday 06 July 2025 20:03:18 +0000 (0:00:00.182) 0:00:33.098 *********** 2025-07-06 20:03:25.764043 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:03:25.764054 | orchestrator | 2025-07-06 20:03:25.764065 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:03:25.764077 | orchestrator | Sunday 06 July 2025 20:03:18 +0000 
(0:00:00.206) 0:00:33.305 *********** 2025-07-06 20:03:25.764110 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:03:25.764121 | orchestrator | 2025-07-06 20:03:25.764132 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:03:25.764235 | orchestrator | Sunday 06 July 2025 20:03:19 +0000 (0:00:00.202) 0:00:33.507 *********** 2025-07-06 20:03:25.764247 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_8627e713-83d6-40b5-b9ad-70826e27e3e5) 2025-07-06 20:03:25.764260 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_8627e713-83d6-40b5-b9ad-70826e27e3e5) 2025-07-06 20:03:25.764271 | orchestrator | 2025-07-06 20:03:25.764282 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:03:25.764293 | orchestrator | Sunday 06 July 2025 20:03:19 +0000 (0:00:00.606) 0:00:34.114 *********** 2025-07-06 20:03:25.764304 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4a0eaf3f-1395-4073-9878-c6e703eff332) 2025-07-06 20:03:25.764315 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4a0eaf3f-1395-4073-9878-c6e703eff332) 2025-07-06 20:03:25.764326 | orchestrator | 2025-07-06 20:03:25.764336 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:03:25.764347 | orchestrator | Sunday 06 July 2025 20:03:20 +0000 (0:00:00.938) 0:00:35.053 *********** 2025-07-06 20:03:25.764358 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1751cfdb-b4ca-4b06-9fa0-b986eec2737a) 2025-07-06 20:03:25.764369 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1751cfdb-b4ca-4b06-9fa0-b986eec2737a) 2025-07-06 20:03:25.764380 | orchestrator | 2025-07-06 20:03:25.764391 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:03:25.764402 | orchestrator 
| Sunday 06 July 2025 20:03:21 +0000 (0:00:00.406) 0:00:35.459 *********** 2025-07-06 20:03:25.764412 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_29aeef2c-15f7-4912-be6e-922934b043d5) 2025-07-06 20:03:25.764423 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_29aeef2c-15f7-4912-be6e-922934b043d5) 2025-07-06 20:03:25.764434 | orchestrator | 2025-07-06 20:03:25.764445 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:03:25.764456 | orchestrator | Sunday 06 July 2025 20:03:21 +0000 (0:00:00.430) 0:00:35.890 *********** 2025-07-06 20:03:25.764466 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-07-06 20:03:25.764477 | orchestrator | 2025-07-06 20:03:25.764488 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:03:25.764499 | orchestrator | Sunday 06 July 2025 20:03:21 +0000 (0:00:00.329) 0:00:36.219 *********** 2025-07-06 20:03:25.764528 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-07-06 20:03:25.764539 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-07-06 20:03:25.764550 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-07-06 20:03:25.764561 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-07-06 20:03:25.764572 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-07-06 20:03:25.764582 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-07-06 20:03:25.764593 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-07-06 20:03:25.764604 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-07-06 20:03:25.764614 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-07-06 20:03:25.764625 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-07-06 20:03:25.764636 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-07-06 20:03:25.764646 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-07-06 20:03:25.764665 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-07-06 20:03:25.764676 | orchestrator | 2025-07-06 20:03:25.764686 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:03:25.764697 | orchestrator | Sunday 06 July 2025 20:03:22 +0000 (0:00:00.370) 0:00:36.590 *********** 2025-07-06 20:03:25.764708 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:03:25.764719 | orchestrator | 2025-07-06 20:03:25.764730 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:03:25.764741 | orchestrator | Sunday 06 July 2025 20:03:22 +0000 (0:00:00.210) 0:00:36.800 *********** 2025-07-06 20:03:25.764752 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:03:25.764762 | orchestrator | 2025-07-06 20:03:25.764773 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:03:25.764784 | orchestrator | Sunday 06 July 2025 20:03:22 +0000 (0:00:00.195) 0:00:36.995 *********** 2025-07-06 20:03:25.764795 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:03:25.764805 | orchestrator | 2025-07-06 20:03:25.764816 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:03:25.764827 | 
orchestrator | Sunday 06 July 2025 20:03:22 +0000 (0:00:00.205) 0:00:37.201 *********** 2025-07-06 20:03:25.764837 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:03:25.764848 | orchestrator | 2025-07-06 20:03:25.764859 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:03:25.764870 | orchestrator | Sunday 06 July 2025 20:03:23 +0000 (0:00:00.205) 0:00:37.406 *********** 2025-07-06 20:03:25.764880 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:03:25.764891 | orchestrator | 2025-07-06 20:03:25.764902 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:03:25.764913 | orchestrator | Sunday 06 July 2025 20:03:23 +0000 (0:00:00.195) 0:00:37.601 *********** 2025-07-06 20:03:25.764923 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:03:25.764934 | orchestrator | 2025-07-06 20:03:25.764945 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:03:25.764963 | orchestrator | Sunday 06 July 2025 20:03:23 +0000 (0:00:00.656) 0:00:38.258 *********** 2025-07-06 20:03:25.764975 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:03:25.764985 | orchestrator | 2025-07-06 20:03:25.764996 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:03:25.765007 | orchestrator | Sunday 06 July 2025 20:03:24 +0000 (0:00:00.208) 0:00:38.467 *********** 2025-07-06 20:03:25.765018 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:03:25.765029 | orchestrator | 2025-07-06 20:03:25.765040 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:03:25.765051 | orchestrator | Sunday 06 July 2025 20:03:24 +0000 (0:00:00.202) 0:00:38.670 *********** 2025-07-06 20:03:25.765062 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-07-06 20:03:25.765072 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2025-07-06 20:03:25.765101 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-07-06 20:03:25.765112 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-07-06 20:03:25.765123 | orchestrator | 2025-07-06 20:03:25.765134 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:03:25.765145 | orchestrator | Sunday 06 July 2025 20:03:24 +0000 (0:00:00.647) 0:00:39.318 *********** 2025-07-06 20:03:25.765155 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:03:25.765166 | orchestrator | 2025-07-06 20:03:25.765177 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:03:25.765193 | orchestrator | Sunday 06 July 2025 20:03:25 +0000 (0:00:00.206) 0:00:39.524 *********** 2025-07-06 20:03:25.765204 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:03:25.765215 | orchestrator | 2025-07-06 20:03:25.765226 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:03:25.765236 | orchestrator | Sunday 06 July 2025 20:03:25 +0000 (0:00:00.205) 0:00:39.729 *********** 2025-07-06 20:03:25.765253 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:03:25.765264 | orchestrator | 2025-07-06 20:03:25.765275 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:03:25.765286 | orchestrator | Sunday 06 July 2025 20:03:25 +0000 (0:00:00.187) 0:00:39.916 *********** 2025-07-06 20:03:25.765297 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:03:25.765308 | orchestrator | 2025-07-06 20:03:25.765318 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-07-06 20:03:25.765335 | orchestrator | Sunday 06 July 2025 20:03:25 +0000 (0:00:00.204) 0:00:40.121 *********** 2025-07-06 20:03:29.963426 | orchestrator | ok: [testbed-node-5] => (item={'key': 
'sdb', 'value': None})
2025-07-06 20:03:29.963523 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2025-07-06 20:03:29.963546 | orchestrator |
2025-07-06 20:03:29.963567 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-07-06 20:03:29.963586 | orchestrator | Sunday 06 July 2025 20:03:25 +0000 (0:00:00.169) 0:00:40.291 ***********
2025-07-06 20:03:29.963606 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:03:29.963625 | orchestrator |
2025-07-06 20:03:29.963647 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-07-06 20:03:29.963667 | orchestrator | Sunday 06 July 2025 20:03:26 +0000 (0:00:00.131) 0:00:40.422 ***********
2025-07-06 20:03:29.963687 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:03:29.963707 | orchestrator |
2025-07-06 20:03:29.963726 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-07-06 20:03:29.963746 | orchestrator | Sunday 06 July 2025 20:03:26 +0000 (0:00:00.146) 0:00:40.569 ***********
2025-07-06 20:03:29.963767 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:03:29.963787 | orchestrator |
2025-07-06 20:03:29.963808 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-07-06 20:03:29.963828 | orchestrator | Sunday 06 July 2025 20:03:26 +0000 (0:00:00.130) 0:00:40.699 ***********
2025-07-06 20:03:29.963847 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:03:29.963867 | orchestrator |
2025-07-06 20:03:29.963887 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-07-06 20:03:29.963908 | orchestrator | Sunday 06 July 2025 20:03:26 +0000 (0:00:00.354) 0:00:41.054 ***********
2025-07-06 20:03:29.963928 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'fc1251bd-e592-50b3-b197-385f411a7339'}})
2025-07-06 20:03:29.963948 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b5f0fce0-432f-57fb-bebd-426658f60987'}})
2025-07-06 20:03:29.963969 | orchestrator |
2025-07-06 20:03:29.963988 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-07-06 20:03:29.964010 | orchestrator | Sunday 06 July 2025 20:03:26 +0000 (0:00:00.179) 0:00:41.233 ***********
2025-07-06 20:03:29.964031 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'fc1251bd-e592-50b3-b197-385f411a7339'}})
2025-07-06 20:03:29.964047 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b5f0fce0-432f-57fb-bebd-426658f60987'}})
2025-07-06 20:03:29.964060 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:03:29.964099 | orchestrator |
2025-07-06 20:03:29.964113 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-07-06 20:03:29.964127 | orchestrator | Sunday 06 July 2025 20:03:27 +0000 (0:00:00.162) 0:00:41.395 ***********
2025-07-06 20:03:29.964139 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'fc1251bd-e592-50b3-b197-385f411a7339'}})
2025-07-06 20:03:29.964153 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b5f0fce0-432f-57fb-bebd-426658f60987'}})
2025-07-06 20:03:29.964165 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:03:29.964179 | orchestrator |
2025-07-06 20:03:29.964191 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-07-06 20:03:29.964228 | orchestrator | Sunday 06 July 2025 20:03:27 +0000 (0:00:00.158) 0:00:41.553 ***********
2025-07-06 20:03:29.964241 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'fc1251bd-e592-50b3-b197-385f411a7339'}})
2025-07-06 20:03:29.964254 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b5f0fce0-432f-57fb-bebd-426658f60987'}})
2025-07-06 20:03:29.964267 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:03:29.964279 | orchestrator |
2025-07-06 20:03:29.964291 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-07-06 20:03:29.964304 | orchestrator | Sunday 06 July 2025 20:03:27 +0000 (0:00:00.160) 0:00:41.714 ***********
2025-07-06 20:03:29.964317 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:03:29.964330 | orchestrator |
2025-07-06 20:03:29.964341 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-07-06 20:03:29.964351 | orchestrator | Sunday 06 July 2025 20:03:27 +0000 (0:00:00.135) 0:00:41.850 ***********
2025-07-06 20:03:29.964362 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:03:29.964373 | orchestrator |
2025-07-06 20:03:29.964383 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-07-06 20:03:29.964394 | orchestrator | Sunday 06 July 2025 20:03:27 +0000 (0:00:00.143) 0:00:41.994 ***********
2025-07-06 20:03:29.964405 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:03:29.964415 | orchestrator |
2025-07-06 20:03:29.964426 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-07-06 20:03:29.964437 | orchestrator | Sunday 06 July 2025 20:03:27 +0000 (0:00:00.134) 0:00:42.128 ***********
2025-07-06 20:03:29.964448 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:03:29.964459 | orchestrator |
2025-07-06 20:03:29.964469 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-07-06 20:03:29.964480 | orchestrator | Sunday 06 July 2025 20:03:27 +0000 (0:00:00.130) 0:00:42.259 ***********
2025-07-06 20:03:29.964491 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:03:29.964502 | orchestrator |
2025-07-06 20:03:29.964513 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-07-06 20:03:29.964524 | orchestrator | Sunday 06 July 2025 20:03:28 +0000 (0:00:00.125) 0:00:42.384 ***********
2025-07-06 20:03:29.964535 | orchestrator | ok: [testbed-node-5] => {
2025-07-06 20:03:29.964546 | orchestrator |     "ceph_osd_devices": {
2025-07-06 20:03:29.964557 | orchestrator |         "sdb": {
2025-07-06 20:03:29.964573 | orchestrator |             "osd_lvm_uuid": "fc1251bd-e592-50b3-b197-385f411a7339"
2025-07-06 20:03:29.964602 | orchestrator |         },
2025-07-06 20:03:29.964614 | orchestrator |         "sdc": {
2025-07-06 20:03:29.964626 | orchestrator |             "osd_lvm_uuid": "b5f0fce0-432f-57fb-bebd-426658f60987"
2025-07-06 20:03:29.964645 | orchestrator |         }
2025-07-06 20:03:29.964663 | orchestrator |     }
2025-07-06 20:03:29.964681 | orchestrator | }
2025-07-06 20:03:29.964700 | orchestrator |
2025-07-06 20:03:29.964729 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-07-06 20:03:29.964748 | orchestrator | Sunday 06 July 2025 20:03:28 +0000 (0:00:00.127) 0:00:42.512 ***********
2025-07-06 20:03:29.964766 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:03:29.964785 | orchestrator |
2025-07-06 20:03:29.964803 | orchestrator | TASK [Print DB devices] ********************************************************
2025-07-06 20:03:29.964822 | orchestrator | Sunday 06 July 2025 20:03:28 +0000 (0:00:00.136) 0:00:42.648 ***********
2025-07-06 20:03:29.964841 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:03:29.964861 | orchestrator |
2025-07-06 20:03:29.964879 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-07-06 20:03:29.964898 | orchestrator | Sunday 06 July 2025 20:03:28 +0000 (0:00:00.322) 0:00:42.971 ***********
2025-07-06 20:03:29.964910 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:03:29.964924 | orchestrator |
2025-07-06 20:03:29.964943 | orchestrator | TASK [Print configuration data] ************************************************
2025-07-06 20:03:29.964961 | orchestrator | Sunday 06 July 2025 20:03:28 +0000 (0:00:00.140) 0:00:43.111 ***********
2025-07-06 20:03:29.964999 | orchestrator | changed: [testbed-node-5] => {
2025-07-06 20:03:29.965024 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-07-06 20:03:29.965063 | orchestrator |         "ceph_osd_devices": {
2025-07-06 20:03:29.965119 | orchestrator |             "sdb": {
2025-07-06 20:03:29.965132 | orchestrator |                 "osd_lvm_uuid": "fc1251bd-e592-50b3-b197-385f411a7339"
2025-07-06 20:03:29.965143 | orchestrator |             },
2025-07-06 20:03:29.965153 | orchestrator |             "sdc": {
2025-07-06 20:03:29.965164 | orchestrator |                 "osd_lvm_uuid": "b5f0fce0-432f-57fb-bebd-426658f60987"
2025-07-06 20:03:29.965175 | orchestrator |             }
2025-07-06 20:03:29.965186 | orchestrator |         },
2025-07-06 20:03:29.965197 | orchestrator |         "lvm_volumes": [
2025-07-06 20:03:29.965208 | orchestrator |             {
2025-07-06 20:03:29.965219 | orchestrator |                 "data": "osd-block-fc1251bd-e592-50b3-b197-385f411a7339",
2025-07-06 20:03:29.965230 | orchestrator |                 "data_vg": "ceph-fc1251bd-e592-50b3-b197-385f411a7339"
2025-07-06 20:03:29.965241 | orchestrator |             },
2025-07-06 20:03:29.965251 | orchestrator |             {
2025-07-06 20:03:29.965262 | orchestrator |                 "data": "osd-block-b5f0fce0-432f-57fb-bebd-426658f60987",
2025-07-06 20:03:29.965273 | orchestrator |                 "data_vg": "ceph-b5f0fce0-432f-57fb-bebd-426658f60987"
2025-07-06 20:03:29.965284 | orchestrator |             }
2025-07-06 20:03:29.965294 | orchestrator |         ]
2025-07-06 20:03:29.965305 | orchestrator |     }
2025-07-06 20:03:29.965316 | orchestrator | }
2025-07-06 20:03:29.965327 | orchestrator |
2025-07-06 20:03:29.965338 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-07-06 20:03:29.965349 | orchestrator | Sunday 06 July 2025 20:03:28 +0000 (0:00:00.208) 0:00:43.320 ***********
2025-07-06 20:03:29.965360 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-07-06 20:03:29.965371 | orchestrator |
2025-07-06 20:03:29.965381 | orchestrator | PLAY RECAP *********************************************************************
2025-07-06 20:03:29.965393 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-07-06 20:03:29.965405 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-07-06 20:03:29.965416 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-07-06 20:03:29.965427 | orchestrator |
2025-07-06 20:03:29.965437 | orchestrator |
2025-07-06 20:03:29.965448 | orchestrator |
2025-07-06 20:03:29.965459 | orchestrator | TASKS RECAP ********************************************************************
2025-07-06 20:03:29.965469 | orchestrator | Sunday 06 July 2025 20:03:29 +0000 (0:00:00.986) 0:00:44.306 ***********
2025-07-06 20:03:29.965480 | orchestrator | ===============================================================================
2025-07-06 20:03:29.965491 | orchestrator | Write configuration file ------------------------------------------------ 4.35s
2025-07-06 20:03:29.965501 | orchestrator | Add known partitions to the list of available block devices ------------- 1.21s
2025-07-06 20:03:29.965512 | orchestrator | Add known partitions to the list of available block devices ------------- 1.18s
2025-07-06 20:03:29.965523 | orchestrator | Add known links to the list of available block devices ------------------ 1.17s
2025-07-06 20:03:29.965533 | orchestrator | Get initial list of available block devices ----------------------------- 1.10s
2025-07-06 20:03:29.965544 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.05s
2025-07-06 20:03:29.965560 | orchestrator | Add known links to the list of available block devices ------------------ 0.94s
2025-07-06 20:03:29.965571 | orchestrator | Add known links to the list of available block devices ------------------ 0.90s
2025-07-06 20:03:29.965582 | orchestrator | Generate lvm_volumes structure (block + db + wal) ----------------------- 0.74s
2025-07-06 20:03:29.965609 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.72s
2025-07-06 20:03:29.965621 | orchestrator | Add known links to the list of available block devices ------------------ 0.71s
2025-07-06 20:03:29.965631 | orchestrator | Add known partitions to the list of available block devices ------------- 0.70s
2025-07-06 20:03:29.965642 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s
2025-07-06 20:03:29.965653 | orchestrator | Set WAL devices config data --------------------------------------------- 0.67s
2025-07-06 20:03:29.965676 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.67s
2025-07-06 20:03:30.272287 | orchestrator | Add known partitions to the list of available block devices ------------- 0.66s
2025-07-06 20:03:30.272413 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s
2025-07-06 20:03:30.272422 | orchestrator | Add known links to the list of available block devices ------------------ 0.64s
2025-07-06 20:03:30.272429 | orchestrator | Add known partitions to the list of available block devices ------------- 0.64s
2025-07-06 20:03:30.272436 | orchestrator | Print configuration data ------------------------------------------------ 0.61s
2025-07-06 20:03:52.642299 | orchestrator | 2025-07-06 20:03:52 | INFO  | Task 917516fe-b2eb-4d87-873a-1c49dc84dc95 (sync inventory) is running in background. Output coming soon.
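The "Print configuration data" output of the play above shows that the mapping is purely mechanical: each `osd_lvm_uuid` (note the version nibble "5", i.e. a deterministic, name-based UUID) becomes an LV named `osd-block-<uuid>` inside a VG named `ceph-<uuid>`. A minimal sketch of that derivation, assuming only the naming convention visible in the log (the helper name is ours, not OSISM's):

```python
# Sketch: derive the lvm_volumes list from ceph_osd_devices, following
# the osd-block-<uuid> / ceph-<uuid> naming convention shown in the
# "Print configuration data" task output. Illustrative only.

def lvm_volumes_from_osd_devices(ceph_osd_devices: dict) -> list[dict]:
    return [
        {
            "data": f"osd-block-{spec['osd_lvm_uuid']}",
            "data_vg": f"ceph-{spec['osd_lvm_uuid']}",
        }
        for spec in ceph_osd_devices.values()
    ]

# UUIDs taken verbatim from the task output for testbed-node-5:
devices = {
    "sdb": {"osd_lvm_uuid": "fc1251bd-e592-50b3-b197-385f411a7339"},
    "sdc": {"osd_lvm_uuid": "b5f0fce0-432f-57fb-bebd-426658f60987"},
}
volumes = lvm_volumes_from_osd_devices(devices)
```

The result matches the `lvm_volumes` structure printed by the play for sdb and sdc.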
2025-07-06 20:04:10.120545 | orchestrator | 2025-07-06 20:03:53 | INFO  | Starting group_vars file reorganization
2025-07-06 20:04:10.120679 | orchestrator | 2025-07-06 20:03:53 | INFO  | Moved 0 file(s) to their respective directories
2025-07-06 20:04:10.120698 | orchestrator | 2025-07-06 20:03:53 | INFO  | Group_vars file reorganization completed
2025-07-06 20:04:10.120710 | orchestrator | 2025-07-06 20:03:55 | INFO  | Starting variable preparation from inventory
2025-07-06 20:04:10.120721 | orchestrator | 2025-07-06 20:03:56 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-07-06 20:04:10.120732 | orchestrator | 2025-07-06 20:03:56 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-07-06 20:04:10.120743 | orchestrator | 2025-07-06 20:03:56 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-07-06 20:04:10.120754 | orchestrator | 2025-07-06 20:03:56 | INFO  | 3 file(s) written, 6 host(s) processed
2025-07-06 20:04:10.120765 | orchestrator | 2025-07-06 20:03:56 | INFO  | Variable preparation completed
2025-07-06 20:04:10.120776 | orchestrator | 2025-07-06 20:03:57 | INFO  | Starting inventory overwrite handling
2025-07-06 20:04:10.120786 | orchestrator | 2025-07-06 20:03:57 | INFO  | Handling group overwrites in 99-overwrite
2025-07-06 20:04:10.120797 | orchestrator | 2025-07-06 20:03:57 | INFO  | Removing group frr:children from 60-generic
2025-07-06 20:04:10.120808 | orchestrator | 2025-07-06 20:03:58 | INFO  | Removing group storage:children from 50-kolla
2025-07-06 20:04:10.120819 | orchestrator | 2025-07-06 20:03:58 | INFO  | Removing group netbird:children from 50-infrastruture
2025-07-06 20:04:10.120830 | orchestrator | 2025-07-06 20:03:58 | INFO  | Removing group ceph-mds from 50-ceph
2025-07-06 20:04:10.120841 | orchestrator | 2025-07-06 20:03:58 | INFO  | Removing group ceph-rgw from 50-ceph
2025-07-06 20:04:10.120852 | orchestrator | 2025-07-06 20:03:58 | INFO  | Handling group overwrites in 20-roles
2025-07-06 20:04:10.120863 | orchestrator | 2025-07-06 20:03:58 | INFO  | Removing group k3s_node from 50-infrastruture
2025-07-06 20:04:10.120873 | orchestrator | 2025-07-06 20:03:58 | INFO  | Removed 6 group(s) in total
2025-07-06 20:04:10.120885 | orchestrator | 2025-07-06 20:03:58 | INFO  | Inventory overwrite handling completed
2025-07-06 20:04:10.120896 | orchestrator | 2025-07-06 20:03:58 | INFO  | Starting merge of inventory files
2025-07-06 20:04:10.120933 | orchestrator | 2025-07-06 20:03:58 | INFO  | Inventory files merged successfully
2025-07-06 20:04:10.120945 | orchestrator | 2025-07-06 20:04:02 | INFO  | Generating ClusterShell configuration from Ansible inventory
2025-07-06 20:04:10.120956 | orchestrator | 2025-07-06 20:04:09 | INFO  | Successfully wrote ClusterShell configuration
2025-07-06 20:04:10.120967 | orchestrator | [master c8b32c0] 2025-07-06-20-04
2025-07-06 20:04:10.121029 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2025-07-06 20:04:11.929821 | orchestrator | 2025-07-06 20:04:11 | INFO  | Task 52847a68-949e-4e12-8222-0c60a3fb30d5 (ceph-create-lvm-devices) was prepared for execution.
2025-07-06 20:04:11.929926 | orchestrator | 2025-07-06 20:04:11 | INFO  | It takes a moment until task 52847a68-949e-4e12-8222-0c60a3fb30d5 (ceph-create-lvm-devices) has been started and output is visible here.
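The inventory overwrite handling logged above follows a precedence rule: a group defined in an overwrite layer (here `99-overwrite` and `20-roles`) causes the same-named group to be removed from the lower-priority layer files. A rough sketch of that resolution step, where the data structure and helper are hypothetical but the layer/group names come from the log:

```python
# Sketch: precedence-based inventory group overwrite resolution.
# Layers map an inventory file name to the groups it defines; groups
# redefined in the overwrite layer are removed everywhere else.

def groups_to_remove(layers: dict[str, set[str]],
                     overwrite_layer: str) -> list[tuple[str, str]]:
    """Return (group, layer) pairs to delete because the group is
    redefined in the overwrite layer."""
    removals = []
    for group in layers.get(overwrite_layer, set()):
        for name, groups in layers.items():
            if name != overwrite_layer and group in groups:
                removals.append((group, name))
    return removals

# Example mirroring two of the removals seen in the log output:
layers = {
    "50-kolla": {"storage:children"},
    "60-generic": {"frr:children"},
    "99-overwrite": {"frr:children", "storage:children"},
}
removals = groups_to_remove(layers, "99-overwrite")
```

With this input, `frr:children` is dropped from `60-generic` and `storage:children` from `50-kolla`, matching the "Removing group ... from ..." messages.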
2025-07-06 20:04:22.276248 | orchestrator |
2025-07-06 20:04:22.276369 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-07-06 20:04:22.276385 | orchestrator |
2025-07-06 20:04:22.276397 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-07-06 20:04:22.276409 | orchestrator | Sunday 06 July 2025 20:04:15 +0000 (0:00:00.278) 0:00:00.278 ***********
2025-07-06 20:04:22.276420 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-07-06 20:04:22.276431 | orchestrator |
2025-07-06 20:04:22.276442 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-07-06 20:04:22.276453 | orchestrator | Sunday 06 July 2025 20:04:15 +0000 (0:00:00.235) 0:00:00.514 ***********
2025-07-06 20:04:22.276464 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:04:22.276476 | orchestrator |
2025-07-06 20:04:22.276487 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-06 20:04:22.276498 | orchestrator | Sunday 06 July 2025 20:04:16 +0000 (0:00:00.205) 0:00:00.719 ***********
2025-07-06 20:04:22.276509 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-07-06 20:04:22.276521 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-07-06 20:04:22.276531 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-07-06 20:04:22.276561 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-07-06 20:04:22.276573 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-07-06 20:04:22.276584 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-07-06 20:04:22.276598 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-07-06 20:04:22.276617 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-07-06 20:04:22.276636 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-07-06 20:04:22.276654 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-07-06 20:04:22.276672 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-07-06 20:04:22.276690 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-07-06 20:04:22.276707 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-07-06 20:04:22.276726 | orchestrator |
2025-07-06 20:04:22.276744 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-06 20:04:22.276764 | orchestrator | Sunday 06 July 2025 20:04:16 +0000 (0:00:00.379) 0:00:01.099 ***********
2025-07-06 20:04:22.276784 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:04:22.276805 | orchestrator |
2025-07-06 20:04:22.276820 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-06 20:04:22.276855 | orchestrator | Sunday 06 July 2025 20:04:16 +0000 (0:00:00.349) 0:00:01.448 ***********
2025-07-06 20:04:22.276868 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:04:22.276880 | orchestrator |
2025-07-06 20:04:22.276893 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-06 20:04:22.276906 | orchestrator | Sunday 06 July 2025 20:04:17 +0000 (0:00:00.178) 0:00:01.626 ***********
2025-07-06 20:04:22.276946 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:04:22.276993 | orchestrator |
2025-07-06 20:04:22.277007 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-06 20:04:22.277021 | orchestrator | Sunday 06 July 2025 20:04:17 +0000 (0:00:00.168) 0:00:01.795 ***********
2025-07-06 20:04:22.277033 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:04:22.277045 | orchestrator |
2025-07-06 20:04:22.277058 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-06 20:04:22.277070 | orchestrator | Sunday 06 July 2025 20:04:17 +0000 (0:00:00.175) 0:00:01.971 ***********
2025-07-06 20:04:22.277083 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:04:22.277095 | orchestrator |
2025-07-06 20:04:22.277108 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-06 20:04:22.277122 | orchestrator | Sunday 06 July 2025 20:04:17 +0000 (0:00:00.167) 0:00:02.138 ***********
2025-07-06 20:04:22.277135 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:04:22.277146 | orchestrator |
2025-07-06 20:04:22.277157 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-06 20:04:22.277168 | orchestrator | Sunday 06 July 2025 20:04:17 +0000 (0:00:00.188) 0:00:02.326 ***********
2025-07-06 20:04:22.277178 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:04:22.277189 | orchestrator |
2025-07-06 20:04:22.277201 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-06 20:04:22.277212 | orchestrator | Sunday 06 July 2025 20:04:17 +0000 (0:00:00.181) 0:00:02.508 ***********
2025-07-06 20:04:22.277223 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:04:22.277234 | orchestrator |
2025-07-06 20:04:22.277245 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-06 20:04:22.277255 | orchestrator | Sunday 06 July 2025 20:04:18 +0000 (0:00:00.180) 0:00:02.689 ***********
2025-07-06 20:04:22.277266 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8571afc0-e036-46a5-988a-49c98e90c838)
2025-07-06 20:04:22.277278 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8571afc0-e036-46a5-988a-49c98e90c838)
2025-07-06 20:04:22.277289 | orchestrator |
2025-07-06 20:04:22.277300 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-06 20:04:22.277311 | orchestrator | Sunday 06 July 2025 20:04:18 +0000 (0:00:00.376) 0:00:03.065 ***********
2025-07-06 20:04:22.277350 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3c29cd91-58e9-42ce-8653-990321e9d76b)
2025-07-06 20:04:22.277363 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3c29cd91-58e9-42ce-8653-990321e9d76b)
2025-07-06 20:04:22.277374 | orchestrator |
2025-07-06 20:04:22.277385 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-06 20:04:22.277396 | orchestrator | Sunday 06 July 2025 20:04:18 +0000 (0:00:00.368) 0:00:03.434 ***********
2025-07-06 20:04:22.277407 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_fd99b70f-8aa3-4e15-8e66-07a34fe10111)
2025-07-06 20:04:22.277418 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_fd99b70f-8aa3-4e15-8e66-07a34fe10111)
2025-07-06 20:04:22.277429 | orchestrator |
2025-07-06 20:04:22.277439 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-06 20:04:22.277450 | orchestrator | Sunday 06 July 2025 20:04:19 +0000 (0:00:00.498) 0:00:03.932 ***********
2025-07-06 20:04:22.277461 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0c9a7d91-c8fc-48f8-acad-853231e255dd)
2025-07-06 20:04:22.277472 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0c9a7d91-c8fc-48f8-acad-853231e255dd)
2025-07-06 20:04:22.277495 | orchestrator |
2025-07-06 20:04:22.277506 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-06 20:04:22.277517 | orchestrator | Sunday 06 July 2025 20:04:19 +0000 (0:00:00.528) 0:00:04.460 ***********
2025-07-06 20:04:22.277527 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-07-06 20:04:22.277538 | orchestrator |
2025-07-06 20:04:22.277549 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-06 20:04:22.277560 | orchestrator | Sunday 06 July 2025 20:04:20 +0000 (0:00:00.565) 0:00:05.026 ***********
2025-07-06 20:04:22.277571 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-07-06 20:04:22.277582 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-07-06 20:04:22.277593 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-07-06 20:04:22.277603 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-07-06 20:04:22.277614 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-07-06 20:04:22.277625 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-07-06 20:04:22.277635 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-07-06 20:04:22.277646 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-07-06 20:04:22.277657 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-07-06 20:04:22.277668 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-07-06 20:04:22.277678 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-07-06 20:04:22.277689 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-07-06 20:04:22.277700 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-07-06 20:04:22.277711 | orchestrator |
2025-07-06 20:04:22.277721 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-06 20:04:22.277732 | orchestrator | Sunday 06 July 2025 20:04:20 +0000 (0:00:00.368) 0:00:05.395 ***********
2025-07-06 20:04:22.277743 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:04:22.277754 | orchestrator |
2025-07-06 20:04:22.277772 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-06 20:04:22.277791 | orchestrator | Sunday 06 July 2025 20:04:20 +0000 (0:00:00.197) 0:00:05.593 ***********
2025-07-06 20:04:22.277808 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:04:22.277826 | orchestrator |
2025-07-06 20:04:22.277845 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-06 20:04:22.277863 | orchestrator | Sunday 06 July 2025 20:04:21 +0000 (0:00:00.202) 0:00:05.795 ***********
2025-07-06 20:04:22.277881 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:04:22.277902 | orchestrator |
2025-07-06 20:04:22.277920 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-06 20:04:22.277935 | orchestrator | Sunday 06 July 2025 20:04:21 +0000 (0:00:00.184) 0:00:05.980 ***********
2025-07-06 20:04:22.277946 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:04:22.277997 | orchestrator |
2025-07-06 20:04:22.278010 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-06 20:04:22.278091 | orchestrator | Sunday 06 July 2025 20:04:21 +0000 (0:00:00.189) 0:00:06.170 ***********
2025-07-06 20:04:22.278103 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:04:22.278114 | orchestrator |
2025-07-06 20:04:22.278125 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-06 20:04:22.278137 | orchestrator | Sunday 06 July 2025 20:04:21 +0000 (0:00:00.175) 0:00:06.346 ***********
2025-07-06 20:04:22.279078 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:04:22.279104 | orchestrator |
2025-07-06 20:04:22.279116 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-06 20:04:22.279127 | orchestrator | Sunday 06 July 2025 20:04:21 +0000 (0:00:00.180) 0:00:06.527 ***********
2025-07-06 20:04:22.279139 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:04:22.279150 | orchestrator |
2025-07-06 20:04:22.279162 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-06 20:04:22.279174 | orchestrator | Sunday 06 July 2025 20:04:22 +0000 (0:00:00.186) 0:00:06.713 ***********
2025-07-06 20:04:22.279197 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:04:29.979343 | orchestrator |
2025-07-06 20:04:29.979458 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-06 20:04:29.979475 | orchestrator | Sunday 06 July 2025 20:04:22 +0000 (0:00:00.179) 0:00:06.893 ***********
2025-07-06 20:04:29.979487 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-07-06 20:04:29.979500 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-07-06 20:04:29.979511 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-07-06 20:04:29.979522 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-07-06 20:04:29.979533 | orchestrator |
2025-07-06 20:04:29.979544 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-06 20:04:29.979556
| orchestrator | Sunday 06 July 2025 20:04:23 +0000 (0:00:00.918) 0:00:07.812 ***********
2025-07-06 20:04:29.979567 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:04:29.979578 | orchestrator |
2025-07-06 20:04:29.979589 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-06 20:04:29.979600 | orchestrator | Sunday 06 July 2025 20:04:23 +0000 (0:00:00.181) 0:00:07.994 ***********
2025-07-06 20:04:29.979611 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:04:29.979622 | orchestrator |
2025-07-06 20:04:29.979633 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-06 20:04:29.979644 | orchestrator | Sunday 06 July 2025 20:04:23 +0000 (0:00:00.180) 0:00:08.174 ***********
2025-07-06 20:04:29.979655 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:04:29.979665 | orchestrator |
2025-07-06 20:04:29.979676 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-06 20:04:29.979687 | orchestrator | Sunday 06 July 2025 20:04:23 +0000 (0:00:00.182) 0:00:08.357 ***********
2025-07-06 20:04:29.979698 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:04:29.979709 | orchestrator |
2025-07-06 20:04:29.979720 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-07-06 20:04:29.979750 | orchestrator | Sunday 06 July 2025 20:04:23 +0000 (0:00:00.167) 0:00:08.524 ***********
2025-07-06 20:04:29.979762 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:04:29.979772 | orchestrator |
2025-07-06 20:04:29.979783 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-07-06 20:04:29.979794 | orchestrator | Sunday 06 July 2025 20:04:24 +0000 (0:00:00.124) 0:00:08.648 ***********
2025-07-06 20:04:29.979806 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09'}})
2025-07-06 20:04:29.979817 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1256d0fb-e60f-50ff-afd8-4edc5f2c0a15'}})
2025-07-06 20:04:29.979828 | orchestrator |
2025-07-06 20:04:29.979839 | orchestrator | TASK [Create block VGs] ********************************************************
2025-07-06 20:04:29.979850 | orchestrator | Sunday 06 July 2025 20:04:24 +0000 (0:00:00.173) 0:00:08.822 ***********
2025-07-06 20:04:29.979862 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09', 'data_vg': 'ceph-22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09'})
2025-07-06 20:04:29.979876 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-1256d0fb-e60f-50ff-afd8-4edc5f2c0a15', 'data_vg': 'ceph-1256d0fb-e60f-50ff-afd8-4edc5f2c0a15'})
2025-07-06 20:04:29.979889 | orchestrator |
2025-07-06 20:04:29.979924 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-07-06 20:04:29.979986 | orchestrator | Sunday 06 July 2025 20:04:26 +0000 (0:00:01.991) 0:00:10.814 ***********
2025-07-06 20:04:29.980010 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09', 'data_vg': 'ceph-22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09'})
2025-07-06 20:04:29.980033 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1256d0fb-e60f-50ff-afd8-4edc5f2c0a15', 'data_vg': 'ceph-1256d0fb-e60f-50ff-afd8-4edc5f2c0a15'})
2025-07-06 20:04:29.980046 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:04:29.980058 | orchestrator |
2025-07-06 20:04:29.980071 | orchestrator | TASK [Create block LVs] ********************************************************
2025-07-06 20:04:29.980083 | orchestrator | Sunday 06 July 2025 20:04:26 +0000 (0:00:00.143) 0:00:10.957 ***********
2025-07-06 20:04:29.980096 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09', 'data_vg': 'ceph-22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09'})
2025-07-06 20:04:29.980109 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-1256d0fb-e60f-50ff-afd8-4edc5f2c0a15', 'data_vg': 'ceph-1256d0fb-e60f-50ff-afd8-4edc5f2c0a15'})
2025-07-06 20:04:29.980121 | orchestrator |
2025-07-06 20:04:29.980132 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-07-06 20:04:29.980143 | orchestrator | Sunday 06 July 2025 20:04:27 +0000 (0:00:01.498) 0:00:12.455 ***********
2025-07-06 20:04:29.980154 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09', 'data_vg': 'ceph-22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09'})
2025-07-06 20:04:29.980164 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1256d0fb-e60f-50ff-afd8-4edc5f2c0a15', 'data_vg': 'ceph-1256d0fb-e60f-50ff-afd8-4edc5f2c0a15'})
2025-07-06 20:04:29.980175 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:04:29.980186 | orchestrator |
2025-07-06 20:04:29.980196 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-07-06 20:04:29.980207 | orchestrator | Sunday 06 July 2025 20:04:27 +0000 (0:00:00.139) 0:00:12.618 ***********
2025-07-06 20:04:29.980218 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:04:29.980228 | orchestrator |
2025-07-06 20:04:29.980245 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-07-06 20:04:29.980274 | orchestrator | Sunday 06 July 2025 20:04:28 +0000 (0:00:00.139) 0:00:12.758 ***********
2025-07-06 20:04:29.980285 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09', 'data_vg': 'ceph-22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09'})
2025-07-06 20:04:29.980296 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1256d0fb-e60f-50ff-afd8-4edc5f2c0a15', 'data_vg': 'ceph-1256d0fb-e60f-50ff-afd8-4edc5f2c0a15'})
2025-07-06 20:04:29.980307 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:04:29.980318 | orchestrator |
2025-07-06 20:04:29.980329 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-07-06 20:04:29.980339 | orchestrator | Sunday 06 July 2025 20:04:28 +0000 (0:00:00.345) 0:00:13.104 ***********
2025-07-06 20:04:29.980350 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:04:29.980360 | orchestrator |
2025-07-06 20:04:29.980371 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-07-06 20:04:29.980381 | orchestrator | Sunday 06 July 2025 20:04:28 +0000 (0:00:00.143) 0:00:13.247 ***********
2025-07-06 20:04:29.980392 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09', 'data_vg': 'ceph-22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09'})
2025-07-06 20:04:29.980403 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1256d0fb-e60f-50ff-afd8-4edc5f2c0a15', 'data_vg': 'ceph-1256d0fb-e60f-50ff-afd8-4edc5f2c0a15'})
2025-07-06 20:04:29.980414 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:04:29.980424 | orchestrator |
2025-07-06 20:04:29.980435 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-07-06 20:04:29.980454 | orchestrator | Sunday 06 July 2025 20:04:28 +0000 (0:00:00.154) 0:00:13.402 ***********
2025-07-06 20:04:29.980465 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:04:29.980476 | orchestrator |
2025-07-06 20:04:29.980486 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-07-06 20:04:29.980497 | orchestrator | Sunday 06 July 2025 20:04:28 +0000 (0:00:00.155) 0:00:13.558 ***********
2025-07-06 20:04:29.980508 | orchestrator | skipping:
[testbed-node-3] => (item={'data': 'osd-block-22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09', 'data_vg': 'ceph-22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09'})  2025-07-06 20:04:29.980519 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1256d0fb-e60f-50ff-afd8-4edc5f2c0a15', 'data_vg': 'ceph-1256d0fb-e60f-50ff-afd8-4edc5f2c0a15'})  2025-07-06 20:04:29.980529 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:04:29.980540 | orchestrator | 2025-07-06 20:04:29.980551 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-07-06 20:04:29.980561 | orchestrator | Sunday 06 July 2025 20:04:29 +0000 (0:00:00.156) 0:00:13.715 *********** 2025-07-06 20:04:29.980572 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:04:29.980583 | orchestrator | 2025-07-06 20:04:29.980593 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-07-06 20:04:29.980604 | orchestrator | Sunday 06 July 2025 20:04:29 +0000 (0:00:00.140) 0:00:13.856 *********** 2025-07-06 20:04:29.980614 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09', 'data_vg': 'ceph-22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09'})  2025-07-06 20:04:29.980625 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1256d0fb-e60f-50ff-afd8-4edc5f2c0a15', 'data_vg': 'ceph-1256d0fb-e60f-50ff-afd8-4edc5f2c0a15'})  2025-07-06 20:04:29.980636 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:04:29.980647 | orchestrator | 2025-07-06 20:04:29.980657 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-07-06 20:04:29.980668 | orchestrator | Sunday 06 July 2025 20:04:29 +0000 (0:00:00.151) 0:00:14.007 *********** 2025-07-06 20:04:29.980679 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09', 'data_vg': 'ceph-22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09'})  
2025-07-06 20:04:29.980689 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1256d0fb-e60f-50ff-afd8-4edc5f2c0a15', 'data_vg': 'ceph-1256d0fb-e60f-50ff-afd8-4edc5f2c0a15'})  2025-07-06 20:04:29.980700 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:04:29.980711 | orchestrator | 2025-07-06 20:04:29.980721 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-07-06 20:04:29.980732 | orchestrator | Sunday 06 July 2025 20:04:29 +0000 (0:00:00.159) 0:00:14.166 *********** 2025-07-06 20:04:29.980743 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09', 'data_vg': 'ceph-22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09'})  2025-07-06 20:04:29.980753 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1256d0fb-e60f-50ff-afd8-4edc5f2c0a15', 'data_vg': 'ceph-1256d0fb-e60f-50ff-afd8-4edc5f2c0a15'})  2025-07-06 20:04:29.980764 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:04:29.980775 | orchestrator | 2025-07-06 20:04:29.980785 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-07-06 20:04:29.980796 | orchestrator | Sunday 06 July 2025 20:04:29 +0000 (0:00:00.155) 0:00:14.321 *********** 2025-07-06 20:04:29.980807 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:04:29.980817 | orchestrator | 2025-07-06 20:04:29.980828 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-07-06 20:04:29.980839 | orchestrator | Sunday 06 July 2025 20:04:29 +0000 (0:00:00.130) 0:00:14.451 *********** 2025-07-06 20:04:29.980850 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:04:29.980860 | orchestrator | 2025-07-06 20:04:29.980876 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-07-06 20:04:35.901873 | orchestrator | Sunday 06 July 2025 20:04:29 +0000 (0:00:00.143) 
0:00:14.594 *********** 2025-07-06 20:04:35.901989 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:04:35.902001 | orchestrator | 2025-07-06 20:04:35.902010 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-07-06 20:04:35.902060 | orchestrator | Sunday 06 July 2025 20:04:30 +0000 (0:00:00.142) 0:00:14.737 *********** 2025-07-06 20:04:35.902068 | orchestrator | ok: [testbed-node-3] => { 2025-07-06 20:04:35.902076 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-07-06 20:04:35.902084 | orchestrator | } 2025-07-06 20:04:35.902091 | orchestrator | 2025-07-06 20:04:35.902098 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-07-06 20:04:35.902105 | orchestrator | Sunday 06 July 2025 20:04:30 +0000 (0:00:00.322) 0:00:15.059 *********** 2025-07-06 20:04:35.902112 | orchestrator | ok: [testbed-node-3] => { 2025-07-06 20:04:35.902119 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-07-06 20:04:35.902126 | orchestrator | } 2025-07-06 20:04:35.902133 | orchestrator | 2025-07-06 20:04:35.902140 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-07-06 20:04:35.902147 | orchestrator | Sunday 06 July 2025 20:04:30 +0000 (0:00:00.146) 0:00:15.206 *********** 2025-07-06 20:04:35.902153 | orchestrator | ok: [testbed-node-3] => { 2025-07-06 20:04:35.902160 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-07-06 20:04:35.902167 | orchestrator | } 2025-07-06 20:04:35.902174 | orchestrator | 2025-07-06 20:04:35.902181 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-07-06 20:04:35.902188 | orchestrator | Sunday 06 July 2025 20:04:30 +0000 (0:00:00.130) 0:00:15.336 *********** 2025-07-06 20:04:35.902195 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:04:35.902202 | orchestrator | 2025-07-06 20:04:35.902209 | orchestrator | TASK [Gather WAL VGs 
with total and available size in bytes] ******************* 2025-07-06 20:04:35.902215 | orchestrator | Sunday 06 July 2025 20:04:31 +0000 (0:00:00.679) 0:00:16.016 *********** 2025-07-06 20:04:35.902222 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:04:35.902229 | orchestrator | 2025-07-06 20:04:35.902236 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-07-06 20:04:35.902243 | orchestrator | Sunday 06 July 2025 20:04:31 +0000 (0:00:00.529) 0:00:16.546 *********** 2025-07-06 20:04:35.902250 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:04:35.902257 | orchestrator | 2025-07-06 20:04:35.902264 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-07-06 20:04:35.902270 | orchestrator | Sunday 06 July 2025 20:04:32 +0000 (0:00:00.570) 0:00:17.116 *********** 2025-07-06 20:04:35.902277 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:04:35.902284 | orchestrator | 2025-07-06 20:04:35.902291 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-07-06 20:04:35.902297 | orchestrator | Sunday 06 July 2025 20:04:32 +0000 (0:00:00.152) 0:00:17.269 *********** 2025-07-06 20:04:35.902304 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:04:35.902311 | orchestrator | 2025-07-06 20:04:35.902318 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-07-06 20:04:35.902324 | orchestrator | Sunday 06 July 2025 20:04:32 +0000 (0:00:00.119) 0:00:17.389 *********** 2025-07-06 20:04:35.902331 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:04:35.902338 | orchestrator | 2025-07-06 20:04:35.902345 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-07-06 20:04:35.902352 | orchestrator | Sunday 06 July 2025 20:04:32 +0000 (0:00:00.122) 0:00:17.512 *********** 2025-07-06 20:04:35.902358 | orchestrator | ok: 
[testbed-node-3] => { 2025-07-06 20:04:35.902365 | orchestrator |  "vgs_report": { 2025-07-06 20:04:35.902372 | orchestrator |  "vg": [] 2025-07-06 20:04:35.902379 | orchestrator |  } 2025-07-06 20:04:35.902386 | orchestrator | } 2025-07-06 20:04:35.902393 | orchestrator | 2025-07-06 20:04:35.902399 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-07-06 20:04:35.902424 | orchestrator | Sunday 06 July 2025 20:04:33 +0000 (0:00:00.151) 0:00:17.664 *********** 2025-07-06 20:04:35.902432 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:04:35.902441 | orchestrator | 2025-07-06 20:04:35.902462 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-07-06 20:04:35.902471 | orchestrator | Sunday 06 July 2025 20:04:33 +0000 (0:00:00.131) 0:00:17.795 *********** 2025-07-06 20:04:35.902479 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:04:35.902487 | orchestrator | 2025-07-06 20:04:35.902495 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-07-06 20:04:35.902503 | orchestrator | Sunday 06 July 2025 20:04:33 +0000 (0:00:00.131) 0:00:17.927 *********** 2025-07-06 20:04:35.902511 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:04:35.902519 | orchestrator | 2025-07-06 20:04:35.902527 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-07-06 20:04:35.902535 | orchestrator | Sunday 06 July 2025 20:04:33 +0000 (0:00:00.308) 0:00:18.235 *********** 2025-07-06 20:04:35.902543 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:04:35.902550 | orchestrator | 2025-07-06 20:04:35.902559 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-07-06 20:04:35.902566 | orchestrator | Sunday 06 July 2025 20:04:33 +0000 (0:00:00.128) 0:00:18.363 *********** 2025-07-06 20:04:35.902574 | orchestrator | skipping: 
[testbed-node-3] 2025-07-06 20:04:35.902581 | orchestrator | 2025-07-06 20:04:35.902588 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-07-06 20:04:35.902594 | orchestrator | Sunday 06 July 2025 20:04:33 +0000 (0:00:00.120) 0:00:18.483 *********** 2025-07-06 20:04:35.902601 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:04:35.902608 | orchestrator | 2025-07-06 20:04:35.902614 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-07-06 20:04:35.902621 | orchestrator | Sunday 06 July 2025 20:04:33 +0000 (0:00:00.130) 0:00:18.614 *********** 2025-07-06 20:04:35.902627 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:04:35.902634 | orchestrator | 2025-07-06 20:04:35.902641 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-07-06 20:04:35.902648 | orchestrator | Sunday 06 July 2025 20:04:34 +0000 (0:00:00.134) 0:00:18.748 *********** 2025-07-06 20:04:35.902654 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:04:35.902661 | orchestrator | 2025-07-06 20:04:35.902674 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-07-06 20:04:35.902694 | orchestrator | Sunday 06 July 2025 20:04:34 +0000 (0:00:00.122) 0:00:18.870 *********** 2025-07-06 20:04:35.902701 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:04:35.902707 | orchestrator | 2025-07-06 20:04:35.902714 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-07-06 20:04:35.902721 | orchestrator | Sunday 06 July 2025 20:04:34 +0000 (0:00:00.120) 0:00:18.990 *********** 2025-07-06 20:04:35.902727 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:04:35.902734 | orchestrator | 2025-07-06 20:04:35.902740 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-07-06 20:04:35.902747 | 
orchestrator | Sunday 06 July 2025 20:04:34 +0000 (0:00:00.110) 0:00:19.101 *********** 2025-07-06 20:04:35.902753 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:04:35.902760 | orchestrator | 2025-07-06 20:04:35.902767 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-07-06 20:04:35.902773 | orchestrator | Sunday 06 July 2025 20:04:34 +0000 (0:00:00.115) 0:00:19.217 *********** 2025-07-06 20:04:35.902780 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:04:35.902786 | orchestrator | 2025-07-06 20:04:35.902793 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-07-06 20:04:35.902799 | orchestrator | Sunday 06 July 2025 20:04:34 +0000 (0:00:00.101) 0:00:19.319 *********** 2025-07-06 20:04:35.902806 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:04:35.902812 | orchestrator | 2025-07-06 20:04:35.902819 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-07-06 20:04:35.902832 | orchestrator | Sunday 06 July 2025 20:04:34 +0000 (0:00:00.123) 0:00:19.442 *********** 2025-07-06 20:04:35.902838 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:04:35.902845 | orchestrator | 2025-07-06 20:04:35.902852 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-07-06 20:04:35.902858 | orchestrator | Sunday 06 July 2025 20:04:34 +0000 (0:00:00.126) 0:00:19.568 *********** 2025-07-06 20:04:35.902866 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09', 'data_vg': 'ceph-22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09'})  2025-07-06 20:04:35.902874 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1256d0fb-e60f-50ff-afd8-4edc5f2c0a15', 'data_vg': 'ceph-1256d0fb-e60f-50ff-afd8-4edc5f2c0a15'})  2025-07-06 20:04:35.902881 | orchestrator | skipping: [testbed-node-3] 2025-07-06 
20:04:35.902888 | orchestrator | 2025-07-06 20:04:35.902894 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-07-06 20:04:35.902901 | orchestrator | Sunday 06 July 2025 20:04:35 +0000 (0:00:00.141) 0:00:19.709 *********** 2025-07-06 20:04:35.902907 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09', 'data_vg': 'ceph-22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09'})  2025-07-06 20:04:35.902914 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1256d0fb-e60f-50ff-afd8-4edc5f2c0a15', 'data_vg': 'ceph-1256d0fb-e60f-50ff-afd8-4edc5f2c0a15'})  2025-07-06 20:04:35.902921 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:04:35.902943 | orchestrator | 2025-07-06 20:04:35.902950 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-07-06 20:04:35.902957 | orchestrator | Sunday 06 July 2025 20:04:35 +0000 (0:00:00.257) 0:00:19.967 *********** 2025-07-06 20:04:35.902964 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09', 'data_vg': 'ceph-22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09'})  2025-07-06 20:04:35.902970 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1256d0fb-e60f-50ff-afd8-4edc5f2c0a15', 'data_vg': 'ceph-1256d0fb-e60f-50ff-afd8-4edc5f2c0a15'})  2025-07-06 20:04:35.902977 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:04:35.902984 | orchestrator | 2025-07-06 20:04:35.902990 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-07-06 20:04:35.902997 | orchestrator | Sunday 06 July 2025 20:04:35 +0000 (0:00:00.147) 0:00:20.115 *********** 2025-07-06 20:04:35.903004 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09', 'data_vg': 'ceph-22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09'})  2025-07-06 
20:04:35.903010 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1256d0fb-e60f-50ff-afd8-4edc5f2c0a15', 'data_vg': 'ceph-1256d0fb-e60f-50ff-afd8-4edc5f2c0a15'})  2025-07-06 20:04:35.903017 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:04:35.903024 | orchestrator | 2025-07-06 20:04:35.903030 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-07-06 20:04:35.903037 | orchestrator | Sunday 06 July 2025 20:04:35 +0000 (0:00:00.126) 0:00:20.242 *********** 2025-07-06 20:04:35.903044 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09', 'data_vg': 'ceph-22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09'})  2025-07-06 20:04:35.903051 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1256d0fb-e60f-50ff-afd8-4edc5f2c0a15', 'data_vg': 'ceph-1256d0fb-e60f-50ff-afd8-4edc5f2c0a15'})  2025-07-06 20:04:35.903057 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:04:35.903064 | orchestrator | 2025-07-06 20:04:35.903071 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-07-06 20:04:35.903077 | orchestrator | Sunday 06 July 2025 20:04:35 +0000 (0:00:00.136) 0:00:20.378 *********** 2025-07-06 20:04:35.903088 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09', 'data_vg': 'ceph-22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09'})  2025-07-06 20:04:35.903105 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1256d0fb-e60f-50ff-afd8-4edc5f2c0a15', 'data_vg': 'ceph-1256d0fb-e60f-50ff-afd8-4edc5f2c0a15'})  2025-07-06 20:04:40.875175 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:04:40.875265 | orchestrator | 2025-07-06 20:04:40.875276 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-07-06 20:04:40.875285 | orchestrator | Sunday 06 July 2025 
20:04:35 +0000 (0:00:00.139) 0:00:20.518 *********** 2025-07-06 20:04:40.875292 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09', 'data_vg': 'ceph-22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09'})  2025-07-06 20:04:40.875301 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1256d0fb-e60f-50ff-afd8-4edc5f2c0a15', 'data_vg': 'ceph-1256d0fb-e60f-50ff-afd8-4edc5f2c0a15'})  2025-07-06 20:04:40.875307 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:04:40.875314 | orchestrator | 2025-07-06 20:04:40.875320 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-07-06 20:04:40.875327 | orchestrator | Sunday 06 July 2025 20:04:36 +0000 (0:00:00.143) 0:00:20.661 *********** 2025-07-06 20:04:40.875333 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09', 'data_vg': 'ceph-22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09'})  2025-07-06 20:04:40.875339 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1256d0fb-e60f-50ff-afd8-4edc5f2c0a15', 'data_vg': 'ceph-1256d0fb-e60f-50ff-afd8-4edc5f2c0a15'})  2025-07-06 20:04:40.875346 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:04:40.875352 | orchestrator | 2025-07-06 20:04:40.875358 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-07-06 20:04:40.875364 | orchestrator | Sunday 06 July 2025 20:04:36 +0000 (0:00:00.138) 0:00:20.800 *********** 2025-07-06 20:04:40.875371 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:04:40.875378 | orchestrator | 2025-07-06 20:04:40.875384 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-07-06 20:04:40.875390 | orchestrator | Sunday 06 July 2025 20:04:36 +0000 (0:00:00.650) 0:00:21.450 *********** 2025-07-06 20:04:40.875397 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:04:40.875403 | 
orchestrator | 2025-07-06 20:04:40.875409 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-07-06 20:04:40.875415 | orchestrator | Sunday 06 July 2025 20:04:37 +0000 (0:00:00.533) 0:00:21.984 *********** 2025-07-06 20:04:40.875422 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:04:40.875428 | orchestrator | 2025-07-06 20:04:40.875434 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-07-06 20:04:40.875440 | orchestrator | Sunday 06 July 2025 20:04:37 +0000 (0:00:00.136) 0:00:22.121 *********** 2025-07-06 20:04:40.875447 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-1256d0fb-e60f-50ff-afd8-4edc5f2c0a15', 'vg_name': 'ceph-1256d0fb-e60f-50ff-afd8-4edc5f2c0a15'}) 2025-07-06 20:04:40.875454 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09', 'vg_name': 'ceph-22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09'}) 2025-07-06 20:04:40.875460 | orchestrator | 2025-07-06 20:04:40.875467 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-07-06 20:04:40.875473 | orchestrator | Sunday 06 July 2025 20:04:37 +0000 (0:00:00.173) 0:00:22.295 *********** 2025-07-06 20:04:40.875479 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09', 'data_vg': 'ceph-22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09'})  2025-07-06 20:04:40.875485 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1256d0fb-e60f-50ff-afd8-4edc5f2c0a15', 'data_vg': 'ceph-1256d0fb-e60f-50ff-afd8-4edc5f2c0a15'})  2025-07-06 20:04:40.875492 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:04:40.875498 | orchestrator | 2025-07-06 20:04:40.875504 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-07-06 20:04:40.875528 | orchestrator | Sunday 06 July 2025 20:04:37 +0000 
(0:00:00.136) 0:00:22.432 *********** 2025-07-06 20:04:40.875535 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09', 'data_vg': 'ceph-22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09'})  2025-07-06 20:04:40.875541 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1256d0fb-e60f-50ff-afd8-4edc5f2c0a15', 'data_vg': 'ceph-1256d0fb-e60f-50ff-afd8-4edc5f2c0a15'})  2025-07-06 20:04:40.875547 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:04:40.875554 | orchestrator | 2025-07-06 20:04:40.875560 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-07-06 20:04:40.875566 | orchestrator | Sunday 06 July 2025 20:04:38 +0000 (0:00:00.265) 0:00:22.697 *********** 2025-07-06 20:04:40.875572 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09', 'data_vg': 'ceph-22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09'})  2025-07-06 20:04:40.875579 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1256d0fb-e60f-50ff-afd8-4edc5f2c0a15', 'data_vg': 'ceph-1256d0fb-e60f-50ff-afd8-4edc5f2c0a15'})  2025-07-06 20:04:40.875585 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:04:40.875591 | orchestrator | 2025-07-06 20:04:40.875597 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-07-06 20:04:40.875604 | orchestrator | Sunday 06 July 2025 20:04:38 +0000 (0:00:00.153) 0:00:22.851 *********** 2025-07-06 20:04:40.875610 | orchestrator | ok: [testbed-node-3] => { 2025-07-06 20:04:40.875617 | orchestrator |  "lvm_report": { 2025-07-06 20:04:40.875623 | orchestrator |  "lv": [ 2025-07-06 20:04:40.875630 | orchestrator |  { 2025-07-06 20:04:40.875650 | orchestrator |  "lv_name": "osd-block-1256d0fb-e60f-50ff-afd8-4edc5f2c0a15", 2025-07-06 20:04:40.875658 | orchestrator |  "vg_name": "ceph-1256d0fb-e60f-50ff-afd8-4edc5f2c0a15" 2025-07-06 
20:04:40.875664 | orchestrator |  }, 2025-07-06 20:04:40.875670 | orchestrator |  { 2025-07-06 20:04:40.875676 | orchestrator |  "lv_name": "osd-block-22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09", 2025-07-06 20:04:40.875682 | orchestrator |  "vg_name": "ceph-22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09" 2025-07-06 20:04:40.875688 | orchestrator |  } 2025-07-06 20:04:40.875695 | orchestrator |  ], 2025-07-06 20:04:40.875702 | orchestrator |  "pv": [ 2025-07-06 20:04:40.875709 | orchestrator |  { 2025-07-06 20:04:40.875716 | orchestrator |  "pv_name": "/dev/sdb", 2025-07-06 20:04:40.875724 | orchestrator |  "vg_name": "ceph-22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09" 2025-07-06 20:04:40.875731 | orchestrator |  }, 2025-07-06 20:04:40.875738 | orchestrator |  { 2025-07-06 20:04:40.875745 | orchestrator |  "pv_name": "/dev/sdc", 2025-07-06 20:04:40.875752 | orchestrator |  "vg_name": "ceph-1256d0fb-e60f-50ff-afd8-4edc5f2c0a15" 2025-07-06 20:04:40.875759 | orchestrator |  } 2025-07-06 20:04:40.875767 | orchestrator |  ] 2025-07-06 20:04:40.875774 | orchestrator |  } 2025-07-06 20:04:40.875781 | orchestrator | } 2025-07-06 20:04:40.875789 | orchestrator | 2025-07-06 20:04:40.875796 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-07-06 20:04:40.875803 | orchestrator | 2025-07-06 20:04:40.875824 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-07-06 20:04:40.875832 | orchestrator | Sunday 06 July 2025 20:04:38 +0000 (0:00:00.271) 0:00:23.123 *********** 2025-07-06 20:04:40.875840 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-07-06 20:04:40.875847 | orchestrator | 2025-07-06 20:04:40.875854 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-07-06 20:04:40.875861 | orchestrator | Sunday 06 July 2025 20:04:38 +0000 (0:00:00.223) 0:00:23.346 *********** 2025-07-06 20:04:40.875869 | orchestrator | ok: 
[testbed-node-4] 2025-07-06 20:04:40.875876 | orchestrator | 2025-07-06 20:04:40.875884 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:04:40.875897 | orchestrator | Sunday 06 July 2025 20:04:38 +0000 (0:00:00.209) 0:00:23.556 *********** 2025-07-06 20:04:40.875905 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-07-06 20:04:40.875912 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-07-06 20:04:40.875944 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-07-06 20:04:40.875952 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-07-06 20:04:40.875959 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-07-06 20:04:40.875967 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-07-06 20:04:40.875974 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-07-06 20:04:40.875981 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-07-06 20:04:40.875988 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-07-06 20:04:40.875995 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-07-06 20:04:40.876003 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-07-06 20:04:40.876010 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-07-06 20:04:40.876017 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-07-06 20:04:40.876024 | orchestrator | 2025-07-06 
20:04:40.876030 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:04:40.876038 | orchestrator | Sunday 06 July 2025 20:04:39 +0000 (0:00:00.421) 0:00:23.977 *********** 2025-07-06 20:04:40.876045 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:04:40.876052 | orchestrator | 2025-07-06 20:04:40.876059 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:04:40.876066 | orchestrator | Sunday 06 July 2025 20:04:39 +0000 (0:00:00.173) 0:00:24.151 *********** 2025-07-06 20:04:40.876073 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:04:40.876080 | orchestrator | 2025-07-06 20:04:40.876087 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:04:40.876093 | orchestrator | Sunday 06 July 2025 20:04:39 +0000 (0:00:00.158) 0:00:24.309 *********** 2025-07-06 20:04:40.876099 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:04:40.876105 | orchestrator | 2025-07-06 20:04:40.876111 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:04:40.876117 | orchestrator | Sunday 06 July 2025 20:04:39 +0000 (0:00:00.176) 0:00:24.486 *********** 2025-07-06 20:04:40.876123 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:04:40.876129 | orchestrator | 2025-07-06 20:04:40.876135 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:04:40.876141 | orchestrator | Sunday 06 July 2025 20:04:40 +0000 (0:00:00.461) 0:00:24.947 *********** 2025-07-06 20:04:40.876148 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:04:40.876154 | orchestrator | 2025-07-06 20:04:40.876160 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:04:40.876166 | orchestrator | Sunday 06 July 2025 20:04:40 +0000 (0:00:00.185) 
0:00:25.133 *********** 2025-07-06 20:04:40.876175 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:04:40.876181 | orchestrator | 2025-07-06 20:04:40.876188 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:04:40.876194 | orchestrator | Sunday 06 July 2025 20:04:40 +0000 (0:00:00.175) 0:00:25.309 *********** 2025-07-06 20:04:40.876200 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:04:40.876206 | orchestrator | 2025-07-06 20:04:40.876216 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:04:50.828812 | orchestrator | Sunday 06 July 2025 20:04:40 +0000 (0:00:00.183) 0:00:25.492 *********** 2025-07-06 20:04:50.828992 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:04:50.829013 | orchestrator | 2025-07-06 20:04:50.829026 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:04:50.829038 | orchestrator | Sunday 06 July 2025 20:04:41 +0000 (0:00:00.191) 0:00:25.683 *********** 2025-07-06 20:04:50.829049 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c4ee01e1-6308-4ea1-8b72-f8c7aa0af3e8) 2025-07-06 20:04:50.829061 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c4ee01e1-6308-4ea1-8b72-f8c7aa0af3e8) 2025-07-06 20:04:50.829072 | orchestrator | 2025-07-06 20:04:50.829085 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:04:50.829096 | orchestrator | Sunday 06 July 2025 20:04:41 +0000 (0:00:00.360) 0:00:26.044 *********** 2025-07-06 20:04:50.829107 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c523d18d-f688-4547-bb4c-d63e44be8719) 2025-07-06 20:04:50.829118 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c523d18d-f688-4547-bb4c-d63e44be8719) 2025-07-06 20:04:50.829129 | orchestrator | 2025-07-06 20:04:50.829139 
| orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:04:50.829150 | orchestrator | Sunday 06 July 2025 20:04:41 +0000 (0:00:00.372) 0:00:26.417 *********** 2025-07-06 20:04:50.829161 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e42fce45-67a3-477c-881f-6db38785a929) 2025-07-06 20:04:50.829172 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e42fce45-67a3-477c-881f-6db38785a929) 2025-07-06 20:04:50.829182 | orchestrator | 2025-07-06 20:04:50.829193 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:04:50.829204 | orchestrator | Sunday 06 July 2025 20:04:42 +0000 (0:00:00.384) 0:00:26.802 *********** 2025-07-06 20:04:50.829215 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_28d32b1f-54bf-4890-9371-a2140c9d3e48) 2025-07-06 20:04:50.829226 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_28d32b1f-54bf-4890-9371-a2140c9d3e48) 2025-07-06 20:04:50.829236 | orchestrator | 2025-07-06 20:04:50.829247 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-06 20:04:50.829258 | orchestrator | Sunday 06 July 2025 20:04:42 +0000 (0:00:00.378) 0:00:27.180 *********** 2025-07-06 20:04:50.829269 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-07-06 20:04:50.829280 | orchestrator | 2025-07-06 20:04:50.829291 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:04:50.829301 | orchestrator | Sunday 06 July 2025 20:04:42 +0000 (0:00:00.290) 0:00:27.471 *********** 2025-07-06 20:04:50.829312 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-07-06 20:04:50.829324 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-07-06 
20:04:50.829337 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-07-06 20:04:50.829350 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-07-06 20:04:50.829363 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-07-06 20:04:50.829376 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-07-06 20:04:50.829389 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-07-06 20:04:50.829401 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-07-06 20:04:50.829414 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-07-06 20:04:50.829427 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-07-06 20:04:50.829467 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-07-06 20:04:50.829480 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-07-06 20:04:50.829493 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-07-06 20:04:50.829505 | orchestrator | 2025-07-06 20:04:50.829518 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:04:50.829531 | orchestrator | Sunday 06 July 2025 20:04:43 +0000 (0:00:00.506) 0:00:27.977 *********** 2025-07-06 20:04:50.829544 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:04:50.829557 | orchestrator | 2025-07-06 20:04:50.829570 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:04:50.829583 | orchestrator | Sunday 06 
July 2025 20:04:43 +0000 (0:00:00.181) 0:00:28.158 *********** 2025-07-06 20:04:50.829595 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:04:50.829608 | orchestrator | 2025-07-06 20:04:50.829621 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:04:50.829647 | orchestrator | Sunday 06 July 2025 20:04:43 +0000 (0:00:00.209) 0:00:28.368 *********** 2025-07-06 20:04:50.829660 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:04:50.829673 | orchestrator | 2025-07-06 20:04:50.829685 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:04:50.829698 | orchestrator | Sunday 06 July 2025 20:04:43 +0000 (0:00:00.204) 0:00:28.572 *********** 2025-07-06 20:04:50.829712 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:04:50.829725 | orchestrator | 2025-07-06 20:04:50.829755 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:04:50.829767 | orchestrator | Sunday 06 July 2025 20:04:44 +0000 (0:00:00.199) 0:00:28.772 *********** 2025-07-06 20:04:50.829778 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:04:50.829789 | orchestrator | 2025-07-06 20:04:50.829799 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:04:50.829810 | orchestrator | Sunday 06 July 2025 20:04:44 +0000 (0:00:00.207) 0:00:28.980 *********** 2025-07-06 20:04:50.829821 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:04:50.829832 | orchestrator | 2025-07-06 20:04:50.829843 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:04:50.829854 | orchestrator | Sunday 06 July 2025 20:04:44 +0000 (0:00:00.207) 0:00:29.187 *********** 2025-07-06 20:04:50.829864 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:04:50.829875 | orchestrator | 2025-07-06 20:04:50.829886 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:04:50.829922 | orchestrator | Sunday 06 July 2025 20:04:44 +0000 (0:00:00.200) 0:00:29.388 *********** 2025-07-06 20:04:50.829933 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:04:50.829944 | orchestrator | 2025-07-06 20:04:50.829955 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:04:50.829966 | orchestrator | Sunday 06 July 2025 20:04:44 +0000 (0:00:00.210) 0:00:29.598 *********** 2025-07-06 20:04:50.829977 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-07-06 20:04:50.829988 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-07-06 20:04:50.829999 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-07-06 20:04:50.830010 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-07-06 20:04:50.830072 | orchestrator | 2025-07-06 20:04:50.830084 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:04:50.830095 | orchestrator | Sunday 06 July 2025 20:04:45 +0000 (0:00:00.815) 0:00:30.414 *********** 2025-07-06 20:04:50.830106 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:04:50.830117 | orchestrator | 2025-07-06 20:04:50.830127 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:04:50.830138 | orchestrator | Sunday 06 July 2025 20:04:46 +0000 (0:00:00.220) 0:00:30.634 *********** 2025-07-06 20:04:50.830149 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:04:50.830172 | orchestrator | 2025-07-06 20:04:50.830183 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:04:50.830194 | orchestrator | Sunday 06 July 2025 20:04:46 +0000 (0:00:00.215) 0:00:30.850 *********** 2025-07-06 20:04:50.830205 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:04:50.830216 | 
orchestrator | 2025-07-06 20:04:50.830227 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-06 20:04:50.830238 | orchestrator | Sunday 06 July 2025 20:04:46 +0000 (0:00:00.632) 0:00:31.482 *********** 2025-07-06 20:04:50.830249 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:04:50.830260 | orchestrator | 2025-07-06 20:04:50.830271 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-07-06 20:04:50.830281 | orchestrator | Sunday 06 July 2025 20:04:47 +0000 (0:00:00.193) 0:00:31.676 *********** 2025-07-06 20:04:50.830292 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:04:50.830303 | orchestrator | 2025-07-06 20:04:50.830314 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-07-06 20:04:50.830325 | orchestrator | Sunday 06 July 2025 20:04:47 +0000 (0:00:00.136) 0:00:31.812 *********** 2025-07-06 20:04:50.830335 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '31ad454b-c5b7-54ad-acab-5839a456146b'}}) 2025-07-06 20:04:50.830347 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2eb0e424-9f58-550c-b8cf-76c1b52e517a'}}) 2025-07-06 20:04:50.830358 | orchestrator | 2025-07-06 20:04:50.830368 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-07-06 20:04:50.830379 | orchestrator | Sunday 06 July 2025 20:04:47 +0000 (0:00:00.190) 0:00:32.003 *********** 2025-07-06 20:04:50.830392 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-31ad454b-c5b7-54ad-acab-5839a456146b', 'data_vg': 'ceph-31ad454b-c5b7-54ad-acab-5839a456146b'}) 2025-07-06 20:04:50.830403 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-2eb0e424-9f58-550c-b8cf-76c1b52e517a', 'data_vg': 'ceph-2eb0e424-9f58-550c-b8cf-76c1b52e517a'}) 2025-07-06 20:04:50.830414 | 
orchestrator | 2025-07-06 20:04:50.830425 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-07-06 20:04:50.830436 | orchestrator | Sunday 06 July 2025 20:04:49 +0000 (0:00:01.922) 0:00:33.926 *********** 2025-07-06 20:04:50.830447 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-31ad454b-c5b7-54ad-acab-5839a456146b', 'data_vg': 'ceph-31ad454b-c5b7-54ad-acab-5839a456146b'})  2025-07-06 20:04:50.830459 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2eb0e424-9f58-550c-b8cf-76c1b52e517a', 'data_vg': 'ceph-2eb0e424-9f58-550c-b8cf-76c1b52e517a'})  2025-07-06 20:04:50.830470 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:04:50.830480 | orchestrator | 2025-07-06 20:04:50.830491 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-07-06 20:04:50.830502 | orchestrator | Sunday 06 July 2025 20:04:49 +0000 (0:00:00.154) 0:00:34.080 *********** 2025-07-06 20:04:50.830514 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-31ad454b-c5b7-54ad-acab-5839a456146b', 'data_vg': 'ceph-31ad454b-c5b7-54ad-acab-5839a456146b'}) 2025-07-06 20:04:50.830525 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-2eb0e424-9f58-550c-b8cf-76c1b52e517a', 'data_vg': 'ceph-2eb0e424-9f58-550c-b8cf-76c1b52e517a'}) 2025-07-06 20:04:50.830536 | orchestrator | 2025-07-06 20:04:50.830554 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-07-06 20:04:56.247668 | orchestrator | Sunday 06 July 2025 20:04:50 +0000 (0:00:01.361) 0:00:35.441 *********** 2025-07-06 20:04:56.247782 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-31ad454b-c5b7-54ad-acab-5839a456146b', 'data_vg': 'ceph-31ad454b-c5b7-54ad-acab-5839a456146b'})  2025-07-06 20:04:56.247800 | orchestrator | skipping: [testbed-node-4] => (item={'data': 
'osd-block-2eb0e424-9f58-550c-b8cf-76c1b52e517a', 'data_vg': 'ceph-2eb0e424-9f58-550c-b8cf-76c1b52e517a'})  2025-07-06 20:04:56.247835 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:04:56.247847 | orchestrator | 2025-07-06 20:04:56.247859 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-07-06 20:04:56.247871 | orchestrator | Sunday 06 July 2025 20:04:50 +0000 (0:00:00.144) 0:00:35.586 *********** 2025-07-06 20:04:56.247882 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:04:56.247930 | orchestrator | 2025-07-06 20:04:56.247964 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-07-06 20:04:56.247984 | orchestrator | Sunday 06 July 2025 20:04:51 +0000 (0:00:00.129) 0:00:35.716 *********** 2025-07-06 20:04:56.248002 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-31ad454b-c5b7-54ad-acab-5839a456146b', 'data_vg': 'ceph-31ad454b-c5b7-54ad-acab-5839a456146b'})  2025-07-06 20:04:56.248020 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2eb0e424-9f58-550c-b8cf-76c1b52e517a', 'data_vg': 'ceph-2eb0e424-9f58-550c-b8cf-76c1b52e517a'})  2025-07-06 20:04:56.248039 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:04:56.248059 | orchestrator | 2025-07-06 20:04:56.248078 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-07-06 20:04:56.248097 | orchestrator | Sunday 06 July 2025 20:04:51 +0000 (0:00:00.142) 0:00:35.858 *********** 2025-07-06 20:04:56.248116 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:04:56.248134 | orchestrator | 2025-07-06 20:04:56.248152 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-07-06 20:04:56.248171 | orchestrator | Sunday 06 July 2025 20:04:51 +0000 (0:00:00.130) 0:00:35.989 *********** 2025-07-06 20:04:56.248191 | orchestrator | skipping: 
[testbed-node-4] => (item={'data': 'osd-block-31ad454b-c5b7-54ad-acab-5839a456146b', 'data_vg': 'ceph-31ad454b-c5b7-54ad-acab-5839a456146b'})  2025-07-06 20:04:56.248209 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2eb0e424-9f58-550c-b8cf-76c1b52e517a', 'data_vg': 'ceph-2eb0e424-9f58-550c-b8cf-76c1b52e517a'})  2025-07-06 20:04:56.248228 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:04:56.248246 | orchestrator | 2025-07-06 20:04:56.248264 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-07-06 20:04:56.248282 | orchestrator | Sunday 06 July 2025 20:04:51 +0000 (0:00:00.147) 0:00:36.137 *********** 2025-07-06 20:04:56.248301 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:04:56.248319 | orchestrator | 2025-07-06 20:04:56.248338 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-07-06 20:04:56.248357 | orchestrator | Sunday 06 July 2025 20:04:51 +0000 (0:00:00.331) 0:00:36.468 *********** 2025-07-06 20:04:56.248375 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-31ad454b-c5b7-54ad-acab-5839a456146b', 'data_vg': 'ceph-31ad454b-c5b7-54ad-acab-5839a456146b'})  2025-07-06 20:04:56.248394 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2eb0e424-9f58-550c-b8cf-76c1b52e517a', 'data_vg': 'ceph-2eb0e424-9f58-550c-b8cf-76c1b52e517a'})  2025-07-06 20:04:56.248413 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:04:56.248431 | orchestrator | 2025-07-06 20:04:56.248449 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-07-06 20:04:56.248468 | orchestrator | Sunday 06 July 2025 20:04:51 +0000 (0:00:00.153) 0:00:36.622 *********** 2025-07-06 20:04:56.248487 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:04:56.248506 | orchestrator | 2025-07-06 20:04:56.248524 | orchestrator | TASK [Count OSDs put on ceph_db_devices 
defined in lvm_volumes] **************** 2025-07-06 20:04:56.248542 | orchestrator | Sunday 06 July 2025 20:04:52 +0000 (0:00:00.127) 0:00:36.749 *********** 2025-07-06 20:04:56.248561 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-31ad454b-c5b7-54ad-acab-5839a456146b', 'data_vg': 'ceph-31ad454b-c5b7-54ad-acab-5839a456146b'})  2025-07-06 20:04:56.248579 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2eb0e424-9f58-550c-b8cf-76c1b52e517a', 'data_vg': 'ceph-2eb0e424-9f58-550c-b8cf-76c1b52e517a'})  2025-07-06 20:04:56.248609 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:04:56.248621 | orchestrator | 2025-07-06 20:04:56.248632 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-07-06 20:04:56.248643 | orchestrator | Sunday 06 July 2025 20:04:52 +0000 (0:00:00.143) 0:00:36.893 *********** 2025-07-06 20:04:56.248653 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-31ad454b-c5b7-54ad-acab-5839a456146b', 'data_vg': 'ceph-31ad454b-c5b7-54ad-acab-5839a456146b'})  2025-07-06 20:04:56.248671 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2eb0e424-9f58-550c-b8cf-76c1b52e517a', 'data_vg': 'ceph-2eb0e424-9f58-550c-b8cf-76c1b52e517a'})  2025-07-06 20:04:56.248683 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:04:56.248693 | orchestrator | 2025-07-06 20:04:56.248704 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-07-06 20:04:56.248716 | orchestrator | Sunday 06 July 2025 20:04:52 +0000 (0:00:00.157) 0:00:37.050 *********** 2025-07-06 20:04:56.248745 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-31ad454b-c5b7-54ad-acab-5839a456146b', 'data_vg': 'ceph-31ad454b-c5b7-54ad-acab-5839a456146b'})  2025-07-06 20:04:56.248757 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2eb0e424-9f58-550c-b8cf-76c1b52e517a', 
'data_vg': 'ceph-2eb0e424-9f58-550c-b8cf-76c1b52e517a'})  2025-07-06 20:04:56.248768 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:04:56.248779 | orchestrator | 2025-07-06 20:04:56.248790 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-07-06 20:04:56.248801 | orchestrator | Sunday 06 July 2025 20:04:52 +0000 (0:00:00.151) 0:00:37.201 *********** 2025-07-06 20:04:56.248811 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:04:56.248822 | orchestrator | 2025-07-06 20:04:56.248833 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-07-06 20:04:56.248844 | orchestrator | Sunday 06 July 2025 20:04:52 +0000 (0:00:00.137) 0:00:37.338 *********** 2025-07-06 20:04:56.248855 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:04:56.248866 | orchestrator | 2025-07-06 20:04:56.248877 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-07-06 20:04:56.248920 | orchestrator | Sunday 06 July 2025 20:04:52 +0000 (0:00:00.122) 0:00:37.461 *********** 2025-07-06 20:04:56.248932 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:04:56.248943 | orchestrator | 2025-07-06 20:04:56.248954 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-07-06 20:04:56.248965 | orchestrator | Sunday 06 July 2025 20:04:52 +0000 (0:00:00.134) 0:00:37.595 *********** 2025-07-06 20:04:56.248976 | orchestrator | ok: [testbed-node-4] => { 2025-07-06 20:04:56.248987 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-07-06 20:04:56.248998 | orchestrator | } 2025-07-06 20:04:56.249009 | orchestrator | 2025-07-06 20:04:56.249020 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-07-06 20:04:56.249031 | orchestrator | Sunday 06 July 2025 20:04:53 +0000 (0:00:00.124) 0:00:37.720 *********** 2025-07-06 20:04:56.249042 | 
orchestrator | ok: [testbed-node-4] => { 2025-07-06 20:04:56.249052 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-07-06 20:04:56.249063 | orchestrator | } 2025-07-06 20:04:56.249074 | orchestrator | 2025-07-06 20:04:56.249085 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-07-06 20:04:56.249095 | orchestrator | Sunday 06 July 2025 20:04:53 +0000 (0:00:00.146) 0:00:37.866 *********** 2025-07-06 20:04:56.249107 | orchestrator | ok: [testbed-node-4] => { 2025-07-06 20:04:56.249118 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-07-06 20:04:56.249129 | orchestrator | } 2025-07-06 20:04:56.249140 | orchestrator | 2025-07-06 20:04:56.249151 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-07-06 20:04:56.249161 | orchestrator | Sunday 06 July 2025 20:04:53 +0000 (0:00:00.143) 0:00:38.009 *********** 2025-07-06 20:04:56.249172 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:04:56.249191 | orchestrator | 2025-07-06 20:04:56.249202 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-07-06 20:04:56.249212 | orchestrator | Sunday 06 July 2025 20:04:54 +0000 (0:00:00.731) 0:00:38.740 *********** 2025-07-06 20:04:56.249223 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:04:56.249234 | orchestrator | 2025-07-06 20:04:56.249245 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-07-06 20:04:56.249256 | orchestrator | Sunday 06 July 2025 20:04:54 +0000 (0:00:00.516) 0:00:39.257 *********** 2025-07-06 20:04:56.249266 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:04:56.249277 | orchestrator | 2025-07-06 20:04:56.249288 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-07-06 20:04:56.249298 | orchestrator | Sunday 06 July 2025 20:04:55 +0000 (0:00:00.519) 0:00:39.777 *********** 2025-07-06 
20:04:56.249309 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:04:56.249320 | orchestrator | 2025-07-06 20:04:56.249331 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-07-06 20:04:56.249342 | orchestrator | Sunday 06 July 2025 20:04:55 +0000 (0:00:00.140) 0:00:39.918 *********** 2025-07-06 20:04:56.249352 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:04:56.249363 | orchestrator | 2025-07-06 20:04:56.249374 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-07-06 20:04:56.249385 | orchestrator | Sunday 06 July 2025 20:04:55 +0000 (0:00:00.118) 0:00:40.036 *********** 2025-07-06 20:04:56.249395 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:04:56.249406 | orchestrator | 2025-07-06 20:04:56.249417 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-07-06 20:04:56.249428 | orchestrator | Sunday 06 July 2025 20:04:55 +0000 (0:00:00.136) 0:00:40.172 *********** 2025-07-06 20:04:56.249438 | orchestrator | ok: [testbed-node-4] => { 2025-07-06 20:04:56.249449 | orchestrator |  "vgs_report": { 2025-07-06 20:04:56.249460 | orchestrator |  "vg": [] 2025-07-06 20:04:56.249471 | orchestrator |  } 2025-07-06 20:04:56.249482 | orchestrator | } 2025-07-06 20:04:56.249492 | orchestrator | 2025-07-06 20:04:56.249503 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-07-06 20:04:56.249514 | orchestrator | Sunday 06 July 2025 20:04:55 +0000 (0:00:00.134) 0:00:40.306 *********** 2025-07-06 20:04:56.249525 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:04:56.249535 | orchestrator | 2025-07-06 20:04:56.249546 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-07-06 20:04:56.249557 | orchestrator | Sunday 06 July 2025 20:04:55 +0000 (0:00:00.138) 0:00:40.445 *********** 2025-07-06 
20:04:56.249568 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:04:56.249578 | orchestrator | 2025-07-06 20:04:56.249589 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-07-06 20:04:56.249605 | orchestrator | Sunday 06 July 2025 20:04:55 +0000 (0:00:00.144) 0:00:40.589 *********** 2025-07-06 20:04:56.249616 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:04:56.249627 | orchestrator | 2025-07-06 20:04:56.249638 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-07-06 20:04:56.249648 | orchestrator | Sunday 06 July 2025 20:04:56 +0000 (0:00:00.137) 0:00:40.727 *********** 2025-07-06 20:04:56.249659 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:04:56.249670 | orchestrator | 2025-07-06 20:04:56.249681 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-07-06 20:04:56.249699 | orchestrator | Sunday 06 July 2025 20:04:56 +0000 (0:00:00.134) 0:00:40.862 *********** 2025-07-06 20:05:00.895082 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:05:00.895181 | orchestrator | 2025-07-06 20:05:00.895196 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-07-06 20:05:00.895209 | orchestrator | Sunday 06 July 2025 20:04:56 +0000 (0:00:00.126) 0:00:40.989 *********** 2025-07-06 20:05:00.895221 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:05:00.895232 | orchestrator | 2025-07-06 20:05:00.895243 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-07-06 20:05:00.895280 | orchestrator | Sunday 06 July 2025 20:04:56 +0000 (0:00:00.316) 0:00:41.305 *********** 2025-07-06 20:05:00.895292 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:05:00.895302 | orchestrator | 2025-07-06 20:05:00.895313 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] 
**************** 2025-07-06 20:05:00.895324 | orchestrator | Sunday 06 July 2025 20:04:56 +0000 (0:00:00.157) 0:00:41.463 *********** 2025-07-06 20:05:00.895335 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:05:00.895346 | orchestrator | 2025-07-06 20:05:00.895356 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-07-06 20:05:00.895367 | orchestrator | Sunday 06 July 2025 20:04:56 +0000 (0:00:00.139) 0:00:41.602 *********** 2025-07-06 20:05:00.895378 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:05:00.895388 | orchestrator | 2025-07-06 20:05:00.895399 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-07-06 20:05:00.895410 | orchestrator | Sunday 06 July 2025 20:04:57 +0000 (0:00:00.140) 0:00:41.743 *********** 2025-07-06 20:05:00.895421 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:05:00.895432 | orchestrator | 2025-07-06 20:05:00.895442 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-07-06 20:05:00.895453 | orchestrator | Sunday 06 July 2025 20:04:57 +0000 (0:00:00.134) 0:00:41.878 *********** 2025-07-06 20:05:00.895464 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:05:00.895474 | orchestrator | 2025-07-06 20:05:00.895485 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-07-06 20:05:00.895496 | orchestrator | Sunday 06 July 2025 20:04:57 +0000 (0:00:00.141) 0:00:42.019 *********** 2025-07-06 20:05:00.895506 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:05:00.895517 | orchestrator | 2025-07-06 20:05:00.895530 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-07-06 20:05:00.895544 | orchestrator | Sunday 06 July 2025 20:04:57 +0000 (0:00:00.140) 0:00:42.160 *********** 2025-07-06 20:05:00.895556 | orchestrator | skipping: [testbed-node-4] 
2025-07-06 20:05:00.895569 | orchestrator |
2025-07-06 20:05:00.895582 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-07-06 20:05:00.895594 | orchestrator | Sunday 06 July 2025  20:04:57 +0000 (0:00:00.139)       0:00:42.299 ***********
2025-07-06 20:05:00.895607 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:05:00.895620 | orchestrator |
2025-07-06 20:05:00.895633 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-07-06 20:05:00.895646 | orchestrator | Sunday 06 July 2025  20:04:57 +0000 (0:00:00.137)       0:00:42.437 ***********
2025-07-06 20:05:00.895661 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-31ad454b-c5b7-54ad-acab-5839a456146b', 'data_vg': 'ceph-31ad454b-c5b7-54ad-acab-5839a456146b'})
2025-07-06 20:05:00.895676 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2eb0e424-9f58-550c-b8cf-76c1b52e517a', 'data_vg': 'ceph-2eb0e424-9f58-550c-b8cf-76c1b52e517a'})
2025-07-06 20:05:00.895688 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:05:00.895701 | orchestrator |
2025-07-06 20:05:00.895714 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-07-06 20:05:00.895728 | orchestrator | Sunday 06 July 2025  20:04:57 +0000 (0:00:00.140)       0:00:42.577 ***********
2025-07-06 20:05:00.895742 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-31ad454b-c5b7-54ad-acab-5839a456146b', 'data_vg': 'ceph-31ad454b-c5b7-54ad-acab-5839a456146b'})
2025-07-06 20:05:00.895755 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2eb0e424-9f58-550c-b8cf-76c1b52e517a', 'data_vg': 'ceph-2eb0e424-9f58-550c-b8cf-76c1b52e517a'})
2025-07-06 20:05:00.895768 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:05:00.895781 | orchestrator |
2025-07-06 20:05:00.895794 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-07-06 20:05:00.895808 | orchestrator | Sunday 06 July 2025  20:04:58 +0000 (0:00:00.153)       0:00:42.731 ***********
2025-07-06 20:05:00.895820 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-31ad454b-c5b7-54ad-acab-5839a456146b', 'data_vg': 'ceph-31ad454b-c5b7-54ad-acab-5839a456146b'})
2025-07-06 20:05:00.895843 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2eb0e424-9f58-550c-b8cf-76c1b52e517a', 'data_vg': 'ceph-2eb0e424-9f58-550c-b8cf-76c1b52e517a'})
2025-07-06 20:05:00.895856 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:05:00.895869 | orchestrator |
2025-07-06 20:05:00.895910 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-07-06 20:05:00.895921 | orchestrator | Sunday 06 July 2025  20:04:58 +0000 (0:00:00.147)       0:00:42.879 ***********
2025-07-06 20:05:00.895933 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-31ad454b-c5b7-54ad-acab-5839a456146b', 'data_vg': 'ceph-31ad454b-c5b7-54ad-acab-5839a456146b'})
2025-07-06 20:05:00.895943 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2eb0e424-9f58-550c-b8cf-76c1b52e517a', 'data_vg': 'ceph-2eb0e424-9f58-550c-b8cf-76c1b52e517a'})
2025-07-06 20:05:00.895954 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:05:00.895965 | orchestrator |
2025-07-06 20:05:00.895976 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-07-06 20:05:00.896007 | orchestrator | Sunday 06 July 2025  20:04:58 +0000 (0:00:00.334)       0:00:43.213 ***********
2025-07-06 20:05:00.896019 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-31ad454b-c5b7-54ad-acab-5839a456146b', 'data_vg': 'ceph-31ad454b-c5b7-54ad-acab-5839a456146b'})
2025-07-06 20:05:00.896030 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2eb0e424-9f58-550c-b8cf-76c1b52e517a', 'data_vg': 'ceph-2eb0e424-9f58-550c-b8cf-76c1b52e517a'})
2025-07-06 20:05:00.896041 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:05:00.896051 | orchestrator |
2025-07-06 20:05:00.896062 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-07-06 20:05:00.896073 | orchestrator | Sunday 06 July 2025  20:04:58 +0000 (0:00:00.148)       0:00:43.362 ***********
2025-07-06 20:05:00.896084 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-31ad454b-c5b7-54ad-acab-5839a456146b', 'data_vg': 'ceph-31ad454b-c5b7-54ad-acab-5839a456146b'})
2025-07-06 20:05:00.896095 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2eb0e424-9f58-550c-b8cf-76c1b52e517a', 'data_vg': 'ceph-2eb0e424-9f58-550c-b8cf-76c1b52e517a'})
2025-07-06 20:05:00.896106 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:05:00.896116 | orchestrator |
2025-07-06 20:05:00.896127 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-07-06 20:05:00.896138 | orchestrator | Sunday 06 July 2025  20:04:58 +0000 (0:00:00.157)       0:00:43.515 ***********
2025-07-06 20:05:00.896148 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-31ad454b-c5b7-54ad-acab-5839a456146b', 'data_vg': 'ceph-31ad454b-c5b7-54ad-acab-5839a456146b'})
2025-07-06 20:05:00.896159 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2eb0e424-9f58-550c-b8cf-76c1b52e517a', 'data_vg': 'ceph-2eb0e424-9f58-550c-b8cf-76c1b52e517a'})
2025-07-06 20:05:00.896170 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:05:00.896181 | orchestrator |
2025-07-06 20:05:00.896191 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-07-06 20:05:00.896202 | orchestrator | Sunday 06 July 2025  20:04:59 +0000 (0:00:00.158)       0:00:43.672 ***********
2025-07-06 20:05:00.896213 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-31ad454b-c5b7-54ad-acab-5839a456146b', 'data_vg': 'ceph-31ad454b-c5b7-54ad-acab-5839a456146b'})
2025-07-06 20:05:00.896224 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2eb0e424-9f58-550c-b8cf-76c1b52e517a', 'data_vg': 'ceph-2eb0e424-9f58-550c-b8cf-76c1b52e517a'})
2025-07-06 20:05:00.896280 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:05:00.896293 | orchestrator |
2025-07-06 20:05:00.896304 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-07-06 20:05:00.896323 | orchestrator | Sunday 06 July 2025  20:04:59 +0000 (0:00:00.158)       0:00:43.830 ***********
2025-07-06 20:05:00.896334 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:05:00.896360 | orchestrator |
2025-07-06 20:05:00.896381 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-07-06 20:05:00.896392 | orchestrator | Sunday 06 July 2025  20:04:59 +0000 (0:00:00.513)       0:00:44.344 ***********
2025-07-06 20:05:00.896404 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:05:00.896414 | orchestrator |
2025-07-06 20:05:00.896425 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-07-06 20:05:00.896436 | orchestrator | Sunday 06 July 2025  20:05:00 +0000 (0:00:00.553)       0:00:44.897 ***********
2025-07-06 20:05:00.896447 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:05:00.896457 | orchestrator |
2025-07-06 20:05:00.896468 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-07-06 20:05:00.896479 | orchestrator | Sunday 06 July 2025  20:05:00 +0000 (0:00:00.134)       0:00:45.032 ***********
2025-07-06 20:05:00.896490 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-2eb0e424-9f58-550c-b8cf-76c1b52e517a', 'vg_name': 'ceph-2eb0e424-9f58-550c-b8cf-76c1b52e517a'})
2025-07-06 20:05:00.896502 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-31ad454b-c5b7-54ad-acab-5839a456146b', 'vg_name': 'ceph-31ad454b-c5b7-54ad-acab-5839a456146b'})
2025-07-06 20:05:00.896512 | orchestrator |
2025-07-06 20:05:00.896523 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-07-06 20:05:00.896534 | orchestrator | Sunday 06 July 2025  20:05:00 +0000 (0:00:00.176)       0:00:45.208 ***********
2025-07-06 20:05:00.896545 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-31ad454b-c5b7-54ad-acab-5839a456146b', 'data_vg': 'ceph-31ad454b-c5b7-54ad-acab-5839a456146b'})
2025-07-06 20:05:00.896556 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2eb0e424-9f58-550c-b8cf-76c1b52e517a', 'data_vg': 'ceph-2eb0e424-9f58-550c-b8cf-76c1b52e517a'})
2025-07-06 20:05:00.896566 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:05:00.896577 | orchestrator |
2025-07-06 20:05:00.896588 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-07-06 20:05:00.896604 | orchestrator | Sunday 06 July 2025  20:05:00 +0000 (0:00:00.153)       0:00:45.362 ***********
2025-07-06 20:05:00.896615 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-31ad454b-c5b7-54ad-acab-5839a456146b', 'data_vg': 'ceph-31ad454b-c5b7-54ad-acab-5839a456146b'})
2025-07-06 20:05:00.896626 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2eb0e424-9f58-550c-b8cf-76c1b52e517a', 'data_vg': 'ceph-2eb0e424-9f58-550c-b8cf-76c1b52e517a'})
2025-07-06 20:05:00.896644 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:05:06.862752 | orchestrator |
2025-07-06 20:05:06.862945 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-07-06 20:05:06.862974 | orchestrator | Sunday 06 July 2025  20:05:00 +0000 (0:00:00.148)       0:00:45.511 ***********
2025-07-06 20:05:06.862991 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-31ad454b-c5b7-54ad-acab-5839a456146b', 'data_vg': 'ceph-31ad454b-c5b7-54ad-acab-5839a456146b'})
2025-07-06 20:05:06.863007 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2eb0e424-9f58-550c-b8cf-76c1b52e517a', 'data_vg': 'ceph-2eb0e424-9f58-550c-b8cf-76c1b52e517a'})
2025-07-06 20:05:06.863022 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:05:06.863037 | orchestrator |
2025-07-06 20:05:06.863051 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-07-06 20:05:06.863066 | orchestrator | Sunday 06 July 2025  20:05:01 +0000 (0:00:00.147)       0:00:45.658 ***********
2025-07-06 20:05:06.863080 | orchestrator | ok: [testbed-node-4] => {
2025-07-06 20:05:06.863095 | orchestrator |     "lvm_report": {
2025-07-06 20:05:06.863128 | orchestrator |         "lv": [
2025-07-06 20:05:06.863144 | orchestrator |             {
2025-07-06 20:05:06.863170 | orchestrator |                 "lv_name": "osd-block-2eb0e424-9f58-550c-b8cf-76c1b52e517a",
2025-07-06 20:05:06.863213 | orchestrator |                 "vg_name": "ceph-2eb0e424-9f58-550c-b8cf-76c1b52e517a"
2025-07-06 20:05:06.863228 | orchestrator |             },
2025-07-06 20:05:06.863242 | orchestrator |             {
2025-07-06 20:05:06.863256 | orchestrator |                 "lv_name": "osd-block-31ad454b-c5b7-54ad-acab-5839a456146b",
2025-07-06 20:05:06.863271 | orchestrator |                 "vg_name": "ceph-31ad454b-c5b7-54ad-acab-5839a456146b"
2025-07-06 20:05:06.863283 | orchestrator |             }
2025-07-06 20:05:06.863301 | orchestrator |         ],
2025-07-06 20:05:06.863315 | orchestrator |         "pv": [
2025-07-06 20:05:06.863328 | orchestrator |             {
2025-07-06 20:05:06.863341 | orchestrator |                 "pv_name": "/dev/sdb",
2025-07-06 20:05:06.863355 | orchestrator |                 "vg_name": "ceph-31ad454b-c5b7-54ad-acab-5839a456146b"
2025-07-06 20:05:06.863369 | orchestrator |             },
2025-07-06 20:05:06.863381 | orchestrator |             {
2025-07-06 20:05:06.863394 | orchestrator |                 "pv_name": "/dev/sdc",
2025-07-06 20:05:06.863407 | orchestrator |                 "vg_name": "ceph-2eb0e424-9f58-550c-b8cf-76c1b52e517a"
2025-07-06 20:05:06.863419 | orchestrator |             }
2025-07-06 20:05:06.863433 | orchestrator |         ]
2025-07-06 20:05:06.863446 | orchestrator |     }
2025-07-06 20:05:06.863461 | orchestrator | }
2025-07-06 20:05:06.863477 | orchestrator |
2025-07-06 20:05:06.863490 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-07-06 20:05:06.863505 | orchestrator |
2025-07-06 20:05:06.863520 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-07-06 20:05:06.863533 | orchestrator | Sunday 06 July 2025  20:05:01 +0000 (0:00:00.519)       0:00:46.178 ***********
2025-07-06 20:05:06.863547 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-07-06 20:05:06.863561 | orchestrator |
2025-07-06 20:05:06.863574 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-07-06 20:05:06.863586 | orchestrator | Sunday 06 July 2025  20:05:01 +0000 (0:00:00.256)       0:00:46.435 ***********
2025-07-06 20:05:06.863598 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:05:06.863610 | orchestrator |
2025-07-06 20:05:06.863624 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-06 20:05:06.863637 | orchestrator | Sunday 06 July 2025  20:05:02 +0000 (0:00:00.224)       0:00:46.659 ***********
2025-07-06 20:05:06.863651 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-07-06 20:05:06.863665 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-07-06 20:05:06.863678 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-07-06 20:05:06.863692 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-07-06 20:05:06.863706 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-07-06 20:05:06.863719 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-07-06 20:05:06.863732 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-07-06 20:05:06.863744 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-07-06 20:05:06.863757 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-07-06 20:05:06.863769 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-07-06 20:05:06.863782 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-07-06 20:05:06.863795 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-07-06 20:05:06.863808 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-07-06 20:05:06.863821 | orchestrator |
2025-07-06 20:05:06.863834 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-06 20:05:06.863904 | orchestrator | Sunday 06 July 2025  20:05:02 +0000 (0:00:00.413)       0:00:47.073 ***********
2025-07-06 20:05:06.863921 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:05:06.863936 | orchestrator |
2025-07-06 20:05:06.863950 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-06 20:05:06.863963 | orchestrator | Sunday 06 July 2025  20:05:02 +0000 (0:00:00.209)       0:00:47.282 ***********
2025-07-06 20:05:06.863978 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:05:06.863991 | orchestrator |
2025-07-06 20:05:06.864005 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-06 20:05:06.864046 | orchestrator | Sunday 06 July 2025  20:05:02 +0000 (0:00:00.196)       0:00:47.478 ***********
2025-07-06 20:05:06.864062 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:05:06.864076 | orchestrator |
2025-07-06 20:05:06.864090 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-06 20:05:06.864103 | orchestrator | Sunday 06 July 2025  20:05:03 +0000 (0:00:00.232)       0:00:47.710 ***********
2025-07-06 20:05:06.864117 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:05:06.864206 | orchestrator |
2025-07-06 20:05:06.864220 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-06 20:05:06.864234 | orchestrator | Sunday 06 July 2025  20:05:03 +0000 (0:00:00.191)       0:00:47.902 ***********
2025-07-06 20:05:06.864248 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:05:06.864262 | orchestrator |
2025-07-06 20:05:06.864275 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-06 20:05:06.864289 | orchestrator | Sunday 06 July 2025  20:05:03 +0000 (0:00:00.191)       0:00:48.094 ***********
2025-07-06 20:05:06.864302 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:05:06.864315 | orchestrator |
2025-07-06 20:05:06.864329 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-06 20:05:06.864343 | orchestrator | Sunday 06 July 2025  20:05:04 +0000 (0:00:00.558)       0:00:48.652 ***********
2025-07-06 20:05:06.864356 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:05:06.864370 | orchestrator |
2025-07-06 20:05:06.864382 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-06 20:05:06.864394 | orchestrator | Sunday 06 July 2025  20:05:04 +0000 (0:00:00.197)       0:00:48.850 ***********
2025-07-06 20:05:06.864407 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:05:06.864419 | orchestrator |
2025-07-06 20:05:06.864432 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-06 20:05:06.864446 | orchestrator | Sunday 06 July 2025  20:05:04 +0000 (0:00:00.199)       0:00:49.049 ***********
2025-07-06 20:05:06.864459 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_8627e713-83d6-40b5-b9ad-70826e27e3e5)
2025-07-06 20:05:06.864475 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_8627e713-83d6-40b5-b9ad-70826e27e3e5)
2025-07-06 20:05:06.864488 | orchestrator |
2025-07-06 20:05:06.864502 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-06 20:05:06.864515 | orchestrator | Sunday 06 July 2025  20:05:04 +0000 (0:00:00.387)       0:00:49.437 ***********
2025-07-06 20:05:06.864528 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4a0eaf3f-1395-4073-9878-c6e703eff332)
2025-07-06 20:05:06.864542 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4a0eaf3f-1395-4073-9878-c6e703eff332)
2025-07-06 20:05:06.864554 | orchestrator |
2025-07-06 20:05:06.864567 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-06 20:05:06.864581 | orchestrator | Sunday 06 July 2025  20:05:05 +0000 (0:00:00.411)       0:00:49.849 ***********
2025-07-06 20:05:06.864593 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1751cfdb-b4ca-4b06-9fa0-b986eec2737a)
2025-07-06 20:05:06.864606 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1751cfdb-b4ca-4b06-9fa0-b986eec2737a)
2025-07-06 20:05:06.864620 | orchestrator |
2025-07-06 20:05:06.864633 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-06 20:05:06.864660 | orchestrator | Sunday 06 July 2025  20:05:05 +0000 (0:00:00.440)       0:00:50.289 ***********
2025-07-06 20:05:06.864673 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_29aeef2c-15f7-4912-be6e-922934b043d5)
2025-07-06 20:05:06.864686 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_29aeef2c-15f7-4912-be6e-922934b043d5)
2025-07-06 20:05:06.864698 | orchestrator |
2025-07-06 20:05:06.864711 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-06 20:05:06.864724 | orchestrator | Sunday 06 July 2025  20:05:06 +0000 (0:00:00.427)       0:00:50.716 ***********
2025-07-06 20:05:06.864737 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-07-06 20:05:06.864749 | orchestrator |
2025-07-06 20:05:06.864763 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-06 20:05:06.864777 | orchestrator | Sunday 06 July 2025  20:05:06 +0000 (0:00:00.331)       0:00:51.048 ***********
2025-07-06 20:05:06.864792 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-07-06 20:05:06.864806 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-07-06 20:05:06.864820 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-07-06 20:05:06.864833 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-07-06 20:05:06.864845 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-07-06 20:05:06.864860 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-07-06 20:05:06.864907 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-07-06 20:05:06.864930 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-07-06 20:05:06.864944 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-07-06 20:05:06.864957 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-07-06 20:05:06.864969 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-07-06 20:05:06.864997 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-07-06 20:05:15.471130 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-07-06 20:05:15.471266 | orchestrator |
2025-07-06 20:05:15.471284 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-06 20:05:15.471298 | orchestrator | Sunday 06 July 2025  20:05:06 +0000 (0:00:00.422)       0:00:51.471 ***********
2025-07-06 20:05:15.471314 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:05:15.471335 | orchestrator |
2025-07-06 20:05:15.471355 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-06 20:05:15.471374 | orchestrator | Sunday 06 July 2025  20:05:07 +0000 (0:00:00.186)       0:00:51.657 ***********
2025-07-06 20:05:15.471393 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:05:15.471410 | orchestrator |
2025-07-06 20:05:15.471428 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-06 20:05:15.471446 | orchestrator | Sunday 06 July 2025  20:05:07 +0000 (0:00:00.201)       0:00:51.858 ***********
2025-07-06 20:05:15.471464 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:05:15.471482 | orchestrator |
2025-07-06 20:05:15.471498 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-06 20:05:15.471515 | orchestrator | Sunday 06 July 2025  20:05:07 +0000 (0:00:00.605)       0:00:52.464 ***********
2025-07-06 20:05:15.471533 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:05:15.471551 | orchestrator |
2025-07-06 20:05:15.471568 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-06 20:05:15.471586 | orchestrator | Sunday 06 July 2025  20:05:08 +0000 (0:00:00.206)       0:00:52.671 ***********
2025-07-06 20:05:15.471636 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:05:15.471656 | orchestrator |
2025-07-06 20:05:15.471676 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-06 20:05:15.471695 | orchestrator | Sunday 06 July 2025  20:05:08 +0000 (0:00:00.191)       0:00:52.863 ***********
2025-07-06 20:05:15.471713 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:05:15.471733 | orchestrator |
2025-07-06 20:05:15.471753 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-06 20:05:15.471767 | orchestrator | Sunday 06 July 2025  20:05:08 +0000 (0:00:00.198)       0:00:53.062 ***********
2025-07-06 20:05:15.471781 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:05:15.471801 | orchestrator |
2025-07-06 20:05:15.471883 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-06 20:05:15.471902 | orchestrator | Sunday 06 July 2025  20:05:08 +0000 (0:00:00.198)       0:00:53.260 ***********
2025-07-06 20:05:15.471923 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:05:15.471935 | orchestrator |
2025-07-06 20:05:15.471946 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-06 20:05:15.471957 | orchestrator | Sunday 06 July 2025  20:05:08 +0000 (0:00:00.181)       0:00:53.442 ***********
2025-07-06 20:05:15.471968 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-07-06 20:05:15.471980 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-07-06 20:05:15.471992 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-07-06 20:05:15.472003 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2025-07-06 20:05:15.472014 | orchestrator |
2025-07-06 20:05:15.472025 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-06 20:05:15.472036 | orchestrator | Sunday 06 July 2025  20:05:09 +0000 (0:00:00.625)       0:00:54.067 ***********
2025-07-06 20:05:15.472046 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:05:15.472057 | orchestrator |
2025-07-06 20:05:15.472068 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-06 20:05:15.472079 | orchestrator | Sunday 06 July 2025  20:05:09 +0000 (0:00:00.198)       0:00:54.266 ***********
2025-07-06 20:05:15.472090 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:05:15.472100 | orchestrator |
2025-07-06 20:05:15.472111 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-06 20:05:15.472122 | orchestrator | Sunday 06 July 2025  20:05:09 +0000 (0:00:00.201)       0:00:54.468 ***********
2025-07-06 20:05:15.472133 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:05:15.472144 | orchestrator |
2025-07-06 20:05:15.472154 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-06 20:05:15.472165 | orchestrator | Sunday 06 July 2025  20:05:10 +0000 (0:00:00.185)       0:00:54.654 ***********
2025-07-06 20:05:15.472176 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:05:15.472187 | orchestrator |
2025-07-06 20:05:15.472198 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-07-06 20:05:15.472208 | orchestrator | Sunday 06 July 2025  20:05:10 +0000 (0:00:00.191)       0:00:54.845 ***********
2025-07-06 20:05:15.472219 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:05:15.472230 | orchestrator |
2025-07-06 20:05:15.472241 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-07-06 20:05:15.472251 | orchestrator | Sunday 06 July 2025  20:05:10 +0000 (0:00:00.341)       0:00:55.186 ***********
2025-07-06 20:05:15.472262 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'fc1251bd-e592-50b3-b197-385f411a7339'}})
2025-07-06 20:05:15.472274 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b5f0fce0-432f-57fb-bebd-426658f60987'}})
2025-07-06 20:05:15.472284 | orchestrator |
2025-07-06 20:05:15.472295 | orchestrator | TASK [Create block VGs] ********************************************************
2025-07-06 20:05:15.472306 | orchestrator | Sunday 06 July 2025  20:05:10 +0000 (0:00:00.194)       0:00:55.380 ***********
2025-07-06 20:05:15.472318 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-fc1251bd-e592-50b3-b197-385f411a7339', 'data_vg': 'ceph-fc1251bd-e592-50b3-b197-385f411a7339'})
2025-07-06 20:05:15.472342 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-b5f0fce0-432f-57fb-bebd-426658f60987', 'data_vg': 'ceph-b5f0fce0-432f-57fb-bebd-426658f60987'})
2025-07-06 20:05:15.472353 | orchestrator |
2025-07-06 20:05:15.472364 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-07-06 20:05:15.472397 | orchestrator | Sunday 06 July 2025  20:05:12 +0000 (0:00:01.851)       0:00:57.232 ***********
2025-07-06 20:05:15.472409 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fc1251bd-e592-50b3-b197-385f411a7339', 'data_vg': 'ceph-fc1251bd-e592-50b3-b197-385f411a7339'})
2025-07-06 20:05:15.472422 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5f0fce0-432f-57fb-bebd-426658f60987', 'data_vg': 'ceph-b5f0fce0-432f-57fb-bebd-426658f60987'})
2025-07-06 20:05:15.472433 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:05:15.472443 | orchestrator |
2025-07-06 20:05:15.472454 | orchestrator | TASK [Create block LVs] ********************************************************
2025-07-06 20:05:15.472465 | orchestrator | Sunday 06 July 2025  20:05:12 +0000 (0:00:00.135)       0:00:57.368 ***********
2025-07-06 20:05:15.472476 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-fc1251bd-e592-50b3-b197-385f411a7339', 'data_vg': 'ceph-fc1251bd-e592-50b3-b197-385f411a7339'})
2025-07-06 20:05:15.472505 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-b5f0fce0-432f-57fb-bebd-426658f60987', 'data_vg': 'ceph-b5f0fce0-432f-57fb-bebd-426658f60987'})
2025-07-06 20:05:15.472516 | orchestrator |
2025-07-06 20:05:15.472527 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-07-06 20:05:15.472538 | orchestrator | Sunday 06 July 2025  20:05:14 +0000 (0:00:01.338)       0:00:58.706 ***********
2025-07-06 20:05:15.472549 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fc1251bd-e592-50b3-b197-385f411a7339', 'data_vg': 'ceph-fc1251bd-e592-50b3-b197-385f411a7339'})
2025-07-06 20:05:15.472560 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5f0fce0-432f-57fb-bebd-426658f60987', 'data_vg': 'ceph-b5f0fce0-432f-57fb-bebd-426658f60987'})
2025-07-06 20:05:15.472571 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:05:15.472581 | orchestrator |
2025-07-06 20:05:15.472592 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-07-06 20:05:15.472603 | orchestrator | Sunday 06 July 2025  20:05:14 +0000 (0:00:00.136)       0:00:58.842 ***********
2025-07-06 20:05:15.472614 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:05:15.472625 | orchestrator |
2025-07-06 20:05:15.472636 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-07-06 20:05:15.472647 | orchestrator | Sunday 06 July 2025  20:05:14 +0000 (0:00:00.134)       0:00:58.977 ***********
2025-07-06 20:05:15.472657 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fc1251bd-e592-50b3-b197-385f411a7339', 'data_vg': 'ceph-fc1251bd-e592-50b3-b197-385f411a7339'})
2025-07-06 20:05:15.472669 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5f0fce0-432f-57fb-bebd-426658f60987', 'data_vg': 'ceph-b5f0fce0-432f-57fb-bebd-426658f60987'})
2025-07-06 20:05:15.472680 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:05:15.472690 | orchestrator |
2025-07-06 20:05:15.472702 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-07-06 20:05:15.472728 | orchestrator | Sunday 06 July 2025  20:05:14 +0000 (0:00:00.140)       0:00:59.117 ***********
2025-07-06 20:05:15.472739 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:05:15.472750 | orchestrator |
2025-07-06 20:05:15.472761 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-07-06 20:05:15.472772 | orchestrator | Sunday 06 July 2025  20:05:14 +0000 (0:00:00.126)       0:00:59.244 ***********
2025-07-06 20:05:15.472783 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fc1251bd-e592-50b3-b197-385f411a7339', 'data_vg': 'ceph-fc1251bd-e592-50b3-b197-385f411a7339'})
2025-07-06 20:05:15.472801 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5f0fce0-432f-57fb-bebd-426658f60987', 'data_vg': 'ceph-b5f0fce0-432f-57fb-bebd-426658f60987'})
2025-07-06 20:05:15.472812 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:05:15.472823 | orchestrator |
2025-07-06 20:05:15.472833 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-07-06 20:05:15.472844 | orchestrator | Sunday 06 July 2025  20:05:14 +0000 (0:00:00.135)       0:00:59.380 ***********
2025-07-06 20:05:15.472901 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:05:15.472912 | orchestrator |
2025-07-06 20:05:15.472923 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-07-06 20:05:15.472934 | orchestrator | Sunday 06 July 2025  20:05:14 +0000 (0:00:00.126)       0:00:59.506 ***********
2025-07-06 20:05:15.472944 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fc1251bd-e592-50b3-b197-385f411a7339', 'data_vg': 'ceph-fc1251bd-e592-50b3-b197-385f411a7339'})
2025-07-06 20:05:15.472955 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5f0fce0-432f-57fb-bebd-426658f60987', 'data_vg': 'ceph-b5f0fce0-432f-57fb-bebd-426658f60987'})
2025-07-06 20:05:15.472966 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:05:15.472977 | orchestrator |
2025-07-06 20:05:15.472994 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-07-06 20:05:15.473006 | orchestrator | Sunday 06 July 2025  20:05:15 +0000 (0:00:00.132)       0:00:59.636 ***********
2025-07-06 20:05:15.473017 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:05:15.473028 | orchestrator |
2025-07-06 20:05:15.473039 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-07-06 20:05:15.473050 | orchestrator | Sunday 06 July 2025  20:05:15 +0000 (0:00:00.132)       0:00:59.768 ***********
2025-07-06 20:05:15.473069 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fc1251bd-e592-50b3-b197-385f411a7339', 'data_vg': 'ceph-fc1251bd-e592-50b3-b197-385f411a7339'})
2025-07-06 20:05:21.598429 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5f0fce0-432f-57fb-bebd-426658f60987', 'data_vg': 'ceph-b5f0fce0-432f-57fb-bebd-426658f60987'})
2025-07-06 20:05:21.598545 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:05:21.598562 | orchestrator |
2025-07-06 20:05:21.598574 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-07-06 20:05:21.598587 | orchestrator | Sunday 06 July 2025  20:05:15 +0000 (0:00:00.320)       0:01:00.089 ***********
2025-07-06 20:05:21.598599 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fc1251bd-e592-50b3-b197-385f411a7339', 'data_vg': 'ceph-fc1251bd-e592-50b3-b197-385f411a7339'})
2025-07-06 20:05:21.598610 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5f0fce0-432f-57fb-bebd-426658f60987', 'data_vg': 'ceph-b5f0fce0-432f-57fb-bebd-426658f60987'})
2025-07-06 20:05:21.598621 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:05:21.598632 | orchestrator |
2025-07-06 20:05:21.598644 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-07-06 20:05:21.598654 | orchestrator | Sunday 06 July 2025  20:05:15 +0000 (0:00:00.151)       0:01:00.239 ***********
2025-07-06 20:05:21.598665 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fc1251bd-e592-50b3-b197-385f411a7339', 'data_vg': 'ceph-fc1251bd-e592-50b3-b197-385f411a7339'})
2025-07-06 20:05:21.598677 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5f0fce0-432f-57fb-bebd-426658f60987', 'data_vg': 'ceph-b5f0fce0-432f-57fb-bebd-426658f60987'})
2025-07-06 20:05:21.598688 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:05:21.598698 | orchestrator |
2025-07-06 20:05:21.598709 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-07-06 20:05:21.598720 | orchestrator | Sunday 06 July 2025  20:05:15 +0000 (0:00:00.136)       0:01:00.391 ***********
2025-07-06 20:05:21.598731 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:05:21.598741 | orchestrator |
2025-07-06 20:05:21.598752 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-07-06 20:05:21.598786 | orchestrator | Sunday 06 July 2025  20:05:15 +0000 (0:00:00.136)       0:01:00.528 ***********
2025-07-06 20:05:21.598797 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:05:21.598808 | orchestrator |
2025-07-06 20:05:21.598819 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-07-06 20:05:21.598830 | orchestrator | Sunday 06 July 2025  20:05:16 +0000 (0:00:00.154)       0:01:00.682 ***********
2025-07-06 20:05:21.598866 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:05:21.598877 | orchestrator |
2025-07-06 20:05:21.598888 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-07-06 20:05:21.598899 | orchestrator | Sunday 06 July 2025  20:05:16 +0000 (0:00:00.145)       0:01:00.827 ***********
2025-07-06 20:05:21.598910 | orchestrator | ok: [testbed-node-5] => {
2025-07-06 20:05:21.598922 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2025-07-06 20:05:21.598933 | orchestrator | }
2025-07-06 20:05:21.598944 | orchestrator |
2025-07-06 20:05:21.598958 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-07-06 20:05:21.598972 | orchestrator | Sunday 06 July 2025  20:05:16 +0000 (0:00:00.150)       0:01:00.977 ***********
2025-07-06 20:05:21.598985 | orchestrator | ok: [testbed-node-5] => {
2025-07-06 20:05:21.598998 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2025-07-06 20:05:21.599010 | orchestrator | }
2025-07-06 20:05:21.599023 | orchestrator |
2025-07-06 20:05:21.599035 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-07-06 20:05:21.599048 | orchestrator | Sunday 06 July 2025  20:05:16 +0000 (0:00:00.144)       0:01:01.122 ***********
2025-07-06 20:05:21.599060 | orchestrator | ok: [testbed-node-5] => {
2025-07-06 20:05:21.599072 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2025-07-06 20:05:21.599085 | orchestrator | }
2025-07-06 20:05:21.599099 | orchestrator |
2025-07-06 20:05:21.599112 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-07-06 20:05:21.599125 | orchestrator | Sunday 06 July 2025  20:05:16 +0000 (0:00:00.147)       0:01:01.269 ***********
2025-07-06 20:05:21.599138 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:05:21.599150 | orchestrator |
2025-07-06 20:05:21.599164 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-07-06 20:05:21.599176 | orchestrator | Sunday 06 July 2025  20:05:17 +0000 (0:00:00.529)       0:01:01.798 ***********
2025-07-06 20:05:21.599189 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:05:21.599202 | orchestrator |
2025-07-06 20:05:21.599215 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-07-06 20:05:21.599227 | orchestrator | Sunday 06 July 2025  20:05:17 +0000 (0:00:00.527)       0:01:02.326 ***********
2025-07-06 20:05:21.599240 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:05:21.599252 | orchestrator |
2025-07-06 20:05:21.599265 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-07-06 20:05:21.599279 | orchestrator | Sunday 06 July 2025  20:05:18 +0000 (0:00:00.523)       0:01:02.849 ***********
2025-07-06 20:05:21.599291 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:05:21.599304 | orchestrator |
2025-07-06 20:05:21.599314 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-07-06 20:05:21.599325 | orchestrator | Sunday 06 July 2025  20:05:18 +0000 (0:00:00.360)       0:01:03.210 ***********
2025-07-06 20:05:21.599349 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:05:21.599360 | orchestrator |
2025-07-06 20:05:21.599371 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-07-06 20:05:21.599382 | orchestrator | Sunday 06 July 2025  20:05:18 +0000 (0:00:00.112)       0:01:03.323 ***********
2025-07-06 20:05:21.599393 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:05:21.599404 | orchestrator |
2025-07-06 20:05:21.599414 |
orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-07-06 20:05:21.599425 | orchestrator | Sunday 06 July 2025 20:05:18 +0000 (0:00:00.116) 0:01:03.440 *********** 2025-07-06 20:05:21.599436 | orchestrator | ok: [testbed-node-5] => { 2025-07-06 20:05:21.599455 | orchestrator |  "vgs_report": { 2025-07-06 20:05:21.599466 | orchestrator |  "vg": [] 2025-07-06 20:05:21.599496 | orchestrator |  } 2025-07-06 20:05:21.599507 | orchestrator | } 2025-07-06 20:05:21.599518 | orchestrator | 2025-07-06 20:05:21.599529 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-07-06 20:05:21.599539 | orchestrator | Sunday 06 July 2025 20:05:18 +0000 (0:00:00.155) 0:01:03.595 *********** 2025-07-06 20:05:21.599550 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:05:21.599561 | orchestrator | 2025-07-06 20:05:21.599573 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-07-06 20:05:21.599584 | orchestrator | Sunday 06 July 2025 20:05:19 +0000 (0:00:00.137) 0:01:03.732 *********** 2025-07-06 20:05:21.599594 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:05:21.599605 | orchestrator | 2025-07-06 20:05:21.599616 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-07-06 20:05:21.599626 | orchestrator | Sunday 06 July 2025 20:05:19 +0000 (0:00:00.132) 0:01:03.864 *********** 2025-07-06 20:05:21.599637 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:05:21.599648 | orchestrator | 2025-07-06 20:05:21.599658 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-07-06 20:05:21.599669 | orchestrator | Sunday 06 July 2025 20:05:19 +0000 (0:00:00.138) 0:01:04.003 *********** 2025-07-06 20:05:21.599680 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:05:21.599690 | orchestrator | 2025-07-06 20:05:21.599701 | 
orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-07-06 20:05:21.599712 | orchestrator | Sunday 06 July 2025 20:05:19 +0000 (0:00:00.128) 0:01:04.131 *********** 2025-07-06 20:05:21.599722 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:05:21.599733 | orchestrator | 2025-07-06 20:05:21.599744 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-07-06 20:05:21.599754 | orchestrator | Sunday 06 July 2025 20:05:19 +0000 (0:00:00.138) 0:01:04.269 *********** 2025-07-06 20:05:21.599765 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:05:21.599776 | orchestrator | 2025-07-06 20:05:21.599786 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-07-06 20:05:21.599797 | orchestrator | Sunday 06 July 2025 20:05:19 +0000 (0:00:00.114) 0:01:04.384 *********** 2025-07-06 20:05:21.599808 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:05:21.599818 | orchestrator | 2025-07-06 20:05:21.599829 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-07-06 20:05:21.599862 | orchestrator | Sunday 06 July 2025 20:05:19 +0000 (0:00:00.130) 0:01:04.514 *********** 2025-07-06 20:05:21.599874 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:05:21.599885 | orchestrator | 2025-07-06 20:05:21.599895 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-07-06 20:05:21.599906 | orchestrator | Sunday 06 July 2025 20:05:20 +0000 (0:00:00.171) 0:01:04.686 *********** 2025-07-06 20:05:21.599917 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:05:21.599927 | orchestrator | 2025-07-06 20:05:21.599938 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-07-06 20:05:21.599949 | orchestrator | Sunday 06 July 2025 20:05:20 +0000 (0:00:00.364) 0:01:05.051 *********** 
2025-07-06 20:05:21.599959 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:05:21.599970 | orchestrator | 2025-07-06 20:05:21.599981 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-07-06 20:05:21.599991 | orchestrator | Sunday 06 July 2025 20:05:20 +0000 (0:00:00.136) 0:01:05.187 *********** 2025-07-06 20:05:21.600002 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:05:21.600013 | orchestrator | 2025-07-06 20:05:21.600023 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-07-06 20:05:21.600034 | orchestrator | Sunday 06 July 2025 20:05:20 +0000 (0:00:00.131) 0:01:05.319 *********** 2025-07-06 20:05:21.600045 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:05:21.600056 | orchestrator | 2025-07-06 20:05:21.600066 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-07-06 20:05:21.600084 | orchestrator | Sunday 06 July 2025 20:05:20 +0000 (0:00:00.149) 0:01:05.468 *********** 2025-07-06 20:05:21.600095 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:05:21.600106 | orchestrator | 2025-07-06 20:05:21.600117 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-07-06 20:05:21.600128 | orchestrator | Sunday 06 July 2025 20:05:20 +0000 (0:00:00.137) 0:01:05.605 *********** 2025-07-06 20:05:21.600138 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:05:21.600149 | orchestrator | 2025-07-06 20:05:21.600160 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-07-06 20:05:21.600170 | orchestrator | Sunday 06 July 2025 20:05:21 +0000 (0:00:00.130) 0:01:05.736 *********** 2025-07-06 20:05:21.600182 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fc1251bd-e592-50b3-b197-385f411a7339', 'data_vg': 'ceph-fc1251bd-e592-50b3-b197-385f411a7339'})  2025-07-06 
20:05:21.600193 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5f0fce0-432f-57fb-bebd-426658f60987', 'data_vg': 'ceph-b5f0fce0-432f-57fb-bebd-426658f60987'})  2025-07-06 20:05:21.600204 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:05:21.600214 | orchestrator | 2025-07-06 20:05:21.600225 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-07-06 20:05:21.600236 | orchestrator | Sunday 06 July 2025 20:05:21 +0000 (0:00:00.169) 0:01:05.906 *********** 2025-07-06 20:05:21.600253 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fc1251bd-e592-50b3-b197-385f411a7339', 'data_vg': 'ceph-fc1251bd-e592-50b3-b197-385f411a7339'})  2025-07-06 20:05:21.600264 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5f0fce0-432f-57fb-bebd-426658f60987', 'data_vg': 'ceph-b5f0fce0-432f-57fb-bebd-426658f60987'})  2025-07-06 20:05:21.600275 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:05:21.600286 | orchestrator | 2025-07-06 20:05:21.600297 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-07-06 20:05:21.600307 | orchestrator | Sunday 06 July 2025 20:05:21 +0000 (0:00:00.152) 0:01:06.059 *********** 2025-07-06 20:05:21.600326 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fc1251bd-e592-50b3-b197-385f411a7339', 'data_vg': 'ceph-fc1251bd-e592-50b3-b197-385f411a7339'})  2025-07-06 20:05:24.612678 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5f0fce0-432f-57fb-bebd-426658f60987', 'data_vg': 'ceph-b5f0fce0-432f-57fb-bebd-426658f60987'})  2025-07-06 20:05:24.612787 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:05:24.612802 | orchestrator | 2025-07-06 20:05:24.612815 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-07-06 20:05:24.612827 | orchestrator | Sunday 06 July 2025 
20:05:21 +0000 (0:00:00.156) 0:01:06.215 *********** 2025-07-06 20:05:24.612862 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fc1251bd-e592-50b3-b197-385f411a7339', 'data_vg': 'ceph-fc1251bd-e592-50b3-b197-385f411a7339'})  2025-07-06 20:05:24.612874 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5f0fce0-432f-57fb-bebd-426658f60987', 'data_vg': 'ceph-b5f0fce0-432f-57fb-bebd-426658f60987'})  2025-07-06 20:05:24.612884 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:05:24.612895 | orchestrator | 2025-07-06 20:05:24.612907 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-07-06 20:05:24.612918 | orchestrator | Sunday 06 July 2025 20:05:21 +0000 (0:00:00.188) 0:01:06.404 *********** 2025-07-06 20:05:24.612929 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fc1251bd-e592-50b3-b197-385f411a7339', 'data_vg': 'ceph-fc1251bd-e592-50b3-b197-385f411a7339'})  2025-07-06 20:05:24.612940 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5f0fce0-432f-57fb-bebd-426658f60987', 'data_vg': 'ceph-b5f0fce0-432f-57fb-bebd-426658f60987'})  2025-07-06 20:05:24.612951 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:05:24.612962 | orchestrator | 2025-07-06 20:05:24.613000 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-07-06 20:05:24.613011 | orchestrator | Sunday 06 July 2025 20:05:21 +0000 (0:00:00.166) 0:01:06.571 *********** 2025-07-06 20:05:24.613023 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fc1251bd-e592-50b3-b197-385f411a7339', 'data_vg': 'ceph-fc1251bd-e592-50b3-b197-385f411a7339'})  2025-07-06 20:05:24.613034 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5f0fce0-432f-57fb-bebd-426658f60987', 'data_vg': 'ceph-b5f0fce0-432f-57fb-bebd-426658f60987'})  2025-07-06 20:05:24.613045 | orchestrator | 
skipping: [testbed-node-5] 2025-07-06 20:05:24.613056 | orchestrator | 2025-07-06 20:05:24.613067 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-07-06 20:05:24.613078 | orchestrator | Sunday 06 July 2025 20:05:22 +0000 (0:00:00.140) 0:01:06.711 *********** 2025-07-06 20:05:24.613088 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fc1251bd-e592-50b3-b197-385f411a7339', 'data_vg': 'ceph-fc1251bd-e592-50b3-b197-385f411a7339'})  2025-07-06 20:05:24.613099 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5f0fce0-432f-57fb-bebd-426658f60987', 'data_vg': 'ceph-b5f0fce0-432f-57fb-bebd-426658f60987'})  2025-07-06 20:05:24.613110 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:05:24.613121 | orchestrator | 2025-07-06 20:05:24.613132 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-07-06 20:05:24.613142 | orchestrator | Sunday 06 July 2025 20:05:22 +0000 (0:00:00.376) 0:01:07.088 *********** 2025-07-06 20:05:24.613154 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fc1251bd-e592-50b3-b197-385f411a7339', 'data_vg': 'ceph-fc1251bd-e592-50b3-b197-385f411a7339'})  2025-07-06 20:05:24.613167 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5f0fce0-432f-57fb-bebd-426658f60987', 'data_vg': 'ceph-b5f0fce0-432f-57fb-bebd-426658f60987'})  2025-07-06 20:05:24.613179 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:05:24.613191 | orchestrator | 2025-07-06 20:05:24.613204 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-07-06 20:05:24.613216 | orchestrator | Sunday 06 July 2025 20:05:22 +0000 (0:00:00.153) 0:01:07.242 *********** 2025-07-06 20:05:24.613229 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:05:24.613243 | orchestrator | 2025-07-06 20:05:24.613255 | orchestrator | TASK [Get list of Ceph PVs with 
associated VGs] ******************************** 2025-07-06 20:05:24.613268 | orchestrator | Sunday 06 July 2025 20:05:23 +0000 (0:00:00.524) 0:01:07.766 *********** 2025-07-06 20:05:24.613281 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:05:24.613292 | orchestrator | 2025-07-06 20:05:24.613305 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-07-06 20:05:24.613317 | orchestrator | Sunday 06 July 2025 20:05:23 +0000 (0:00:00.530) 0:01:08.296 *********** 2025-07-06 20:05:24.613330 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:05:24.613343 | orchestrator | 2025-07-06 20:05:24.613356 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-07-06 20:05:24.613368 | orchestrator | Sunday 06 July 2025 20:05:23 +0000 (0:00:00.140) 0:01:08.436 *********** 2025-07-06 20:05:24.613381 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-b5f0fce0-432f-57fb-bebd-426658f60987', 'vg_name': 'ceph-b5f0fce0-432f-57fb-bebd-426658f60987'}) 2025-07-06 20:05:24.613394 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-fc1251bd-e592-50b3-b197-385f411a7339', 'vg_name': 'ceph-fc1251bd-e592-50b3-b197-385f411a7339'}) 2025-07-06 20:05:24.613407 | orchestrator | 2025-07-06 20:05:24.613420 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-07-06 20:05:24.613432 | orchestrator | Sunday 06 July 2025 20:05:23 +0000 (0:00:00.168) 0:01:08.605 *********** 2025-07-06 20:05:24.613465 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fc1251bd-e592-50b3-b197-385f411a7339', 'data_vg': 'ceph-fc1251bd-e592-50b3-b197-385f411a7339'})  2025-07-06 20:05:24.613478 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5f0fce0-432f-57fb-bebd-426658f60987', 'data_vg': 'ceph-b5f0fce0-432f-57fb-bebd-426658f60987'})  2025-07-06 20:05:24.613499 | orchestrator | skipping: 
[testbed-node-5] 2025-07-06 20:05:24.613511 | orchestrator | 2025-07-06 20:05:24.613524 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-07-06 20:05:24.613535 | orchestrator | Sunday 06 July 2025 20:05:24 +0000 (0:00:00.150) 0:01:08.755 *********** 2025-07-06 20:05:24.613546 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fc1251bd-e592-50b3-b197-385f411a7339', 'data_vg': 'ceph-fc1251bd-e592-50b3-b197-385f411a7339'})  2025-07-06 20:05:24.613557 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5f0fce0-432f-57fb-bebd-426658f60987', 'data_vg': 'ceph-b5f0fce0-432f-57fb-bebd-426658f60987'})  2025-07-06 20:05:24.613568 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:05:24.613579 | orchestrator | 2025-07-06 20:05:24.613590 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-07-06 20:05:24.613601 | orchestrator | Sunday 06 July 2025 20:05:24 +0000 (0:00:00.146) 0:01:08.902 *********** 2025-07-06 20:05:24.613612 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fc1251bd-e592-50b3-b197-385f411a7339', 'data_vg': 'ceph-fc1251bd-e592-50b3-b197-385f411a7339'})  2025-07-06 20:05:24.613623 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b5f0fce0-432f-57fb-bebd-426658f60987', 'data_vg': 'ceph-b5f0fce0-432f-57fb-bebd-426658f60987'})  2025-07-06 20:05:24.613634 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:05:24.613645 | orchestrator | 2025-07-06 20:05:24.613655 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-07-06 20:05:24.613684 | orchestrator | Sunday 06 July 2025 20:05:24 +0000 (0:00:00.151) 0:01:09.054 *********** 2025-07-06 20:05:24.613696 | orchestrator | ok: [testbed-node-5] => { 2025-07-06 20:05:24.613707 | orchestrator |  "lvm_report": { 2025-07-06 20:05:24.613718 | orchestrator |  "lv": [ 2025-07-06 
20:05:24.613729 | orchestrator |  { 2025-07-06 20:05:24.613740 | orchestrator |  "lv_name": "osd-block-b5f0fce0-432f-57fb-bebd-426658f60987", 2025-07-06 20:05:24.613751 | orchestrator |  "vg_name": "ceph-b5f0fce0-432f-57fb-bebd-426658f60987" 2025-07-06 20:05:24.613762 | orchestrator |  }, 2025-07-06 20:05:24.613773 | orchestrator |  { 2025-07-06 20:05:24.613784 | orchestrator |  "lv_name": "osd-block-fc1251bd-e592-50b3-b197-385f411a7339", 2025-07-06 20:05:24.613794 | orchestrator |  "vg_name": "ceph-fc1251bd-e592-50b3-b197-385f411a7339" 2025-07-06 20:05:24.613805 | orchestrator |  } 2025-07-06 20:05:24.613816 | orchestrator |  ], 2025-07-06 20:05:24.613827 | orchestrator |  "pv": [ 2025-07-06 20:05:24.613873 | orchestrator |  { 2025-07-06 20:05:24.613884 | orchestrator |  "pv_name": "/dev/sdb", 2025-07-06 20:05:24.613894 | orchestrator |  "vg_name": "ceph-fc1251bd-e592-50b3-b197-385f411a7339" 2025-07-06 20:05:24.613905 | orchestrator |  }, 2025-07-06 20:05:24.613916 | orchestrator |  { 2025-07-06 20:05:24.613927 | orchestrator |  "pv_name": "/dev/sdc", 2025-07-06 20:05:24.613938 | orchestrator |  "vg_name": "ceph-b5f0fce0-432f-57fb-bebd-426658f60987" 2025-07-06 20:05:24.613948 | orchestrator |  } 2025-07-06 20:05:24.613959 | orchestrator |  ] 2025-07-06 20:05:24.613970 | orchestrator |  } 2025-07-06 20:05:24.613980 | orchestrator | } 2025-07-06 20:05:24.613992 | orchestrator | 2025-07-06 20:05:24.614002 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:05:24.614013 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-07-06 20:05:24.614134 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-07-06 20:05:24.614157 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-07-06 20:05:24.614168 | orchestrator | 2025-07-06 20:05:24.614179 | 
orchestrator | 2025-07-06 20:05:24.614189 | orchestrator | 2025-07-06 20:05:24.614200 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:05:24.614211 | orchestrator | Sunday 06 July 2025 20:05:24 +0000 (0:00:00.149) 0:01:09.203 *********** 2025-07-06 20:05:24.614222 | orchestrator | =============================================================================== 2025-07-06 20:05:24.614233 | orchestrator | Create block VGs -------------------------------------------------------- 5.77s 2025-07-06 20:05:24.614250 | orchestrator | Create block LVs -------------------------------------------------------- 4.20s 2025-07-06 20:05:24.614274 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.94s 2025-07-06 20:05:24.614295 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.69s 2025-07-06 20:05:24.614306 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.62s 2025-07-06 20:05:24.614317 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.61s 2025-07-06 20:05:24.614328 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.57s 2025-07-06 20:05:24.614338 | orchestrator | Add known partitions to the list of available block devices ------------- 1.30s 2025-07-06 20:05:24.614357 | orchestrator | Add known links to the list of available block devices ------------------ 1.21s 2025-07-06 20:05:24.972996 | orchestrator | Print LVM report data --------------------------------------------------- 0.94s 2025-07-06 20:05:24.973096 | orchestrator | Add known partitions to the list of available block devices ------------- 0.92s 2025-07-06 20:05:24.973110 | orchestrator | Add known partitions to the list of available block devices ------------- 0.82s 2025-07-06 20:05:24.973122 | orchestrator | Get extra vars for Ceph configuration 
----------------------------------- 0.72s 2025-07-06 20:05:24.973133 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.68s 2025-07-06 20:05:24.973144 | orchestrator | Combine JSON from _db/wal/db_wal_vgs_cmd_output ------------------------- 0.65s 2025-07-06 20:05:24.973155 | orchestrator | Print 'Create WAL LVs for ceph_wal_devices' ----------------------------- 0.65s 2025-07-06 20:05:24.973165 | orchestrator | Get initial list of available block devices ----------------------------- 0.64s 2025-07-06 20:05:24.973176 | orchestrator | Add known partitions to the list of available block devices ------------- 0.63s 2025-07-06 20:05:24.973186 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.63s 2025-07-06 20:05:24.973197 | orchestrator | Print size needed for WAL LVs on ceph_db_wal_devices -------------------- 0.63s 2025-07-06 20:05:37.225217 | orchestrator | 2025-07-06 20:05:37 | INFO  | Task 8f090674-4352-42d9-b492-b93bc7cc7fe1 (facts) was prepared for execution. 2025-07-06 20:05:37.225353 | orchestrator | 2025-07-06 20:05:37 | INFO  | It takes a moment until task 8f090674-4352-42d9-b492-b93bc7cc7fe1 (facts) has been started and output is visible here. 
2025-07-06 20:05:49.601722 | orchestrator | 2025-07-06 20:05:49.601884 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-07-06 20:05:49.601904 | orchestrator | 2025-07-06 20:05:49.601917 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-07-06 20:05:49.601931 | orchestrator | Sunday 06 July 2025 20:05:41 +0000 (0:00:00.277) 0:00:00.277 *********** 2025-07-06 20:05:49.601943 | orchestrator | ok: [testbed-manager] 2025-07-06 20:05:49.601957 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:05:49.601969 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:05:49.601981 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:05:49.601993 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:05:49.602006 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:05:49.602076 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:05:49.602091 | orchestrator | 2025-07-06 20:05:49.602104 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-07-06 20:05:49.602145 | orchestrator | Sunday 06 July 2025 20:05:42 +0000 (0:00:01.095) 0:00:01.372 *********** 2025-07-06 20:05:49.602158 | orchestrator | skipping: [testbed-manager] 2025-07-06 20:05:49.602172 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:05:49.602183 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:05:49.602196 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:05:49.602207 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:05:49.602220 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:05:49.602232 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:05:49.602244 | orchestrator | 2025-07-06 20:05:49.602257 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-07-06 20:05:49.602270 | orchestrator | 2025-07-06 20:05:49.602285 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2025-07-06 20:05:49.602301 | orchestrator | Sunday 06 July 2025 20:05:43 +0000 (0:00:01.219) 0:00:02.591 *********** 2025-07-06 20:05:49.602315 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:05:49.602329 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:05:49.602343 | orchestrator | ok: [testbed-manager] 2025-07-06 20:05:49.602358 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:05:49.602370 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:05:49.602382 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:05:49.602394 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:05:49.602407 | orchestrator | 2025-07-06 20:05:49.602418 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-07-06 20:05:49.602432 | orchestrator | 2025-07-06 20:05:49.602444 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-07-06 20:05:49.602457 | orchestrator | Sunday 06 July 2025 20:05:48 +0000 (0:00:05.118) 0:00:07.710 *********** 2025-07-06 20:05:49.602472 | orchestrator | skipping: [testbed-manager] 2025-07-06 20:05:49.602485 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:05:49.602500 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:05:49.602512 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:05:49.602525 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:05:49.602538 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:05:49.602550 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:05:49.602564 | orchestrator | 2025-07-06 20:05:49.602577 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:05:49.602592 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 20:05:49.602608 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0
2025-07-06 20:05:49.602636 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-06 20:05:49.602649 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-06 20:05:49.602660 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-06 20:05:49.602672 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-06 20:05:49.602684 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-06 20:05:49.602696 | orchestrator |
2025-07-06 20:05:49.602708 | orchestrator |
2025-07-06 20:05:49.602720 | orchestrator | TASKS RECAP ********************************************************************
2025-07-06 20:05:49.602733 | orchestrator | Sunday 06 July 2025 20:05:49 +0000 (0:00:00.515) 0:00:08.226 ***********
2025-07-06 20:05:49.602745 | orchestrator | ===============================================================================
2025-07-06 20:05:49.602769 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.12s
2025-07-06 20:05:49.602781 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.22s
2025-07-06 20:05:49.602841 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.10s
2025-07-06 20:05:49.602853 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.52s
2025-07-06 20:05:49.891303 | orchestrator |
2025-07-06 20:05:49.894703 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sun Jul 6 20:05:49 UTC 2025
2025-07-06 20:05:49.894760 | orchestrator |
2025-07-06 20:05:51.685985 | orchestrator | 2025-07-06 20:05:51 | INFO  | Collection nutshell is prepared for execution
2025-07-06 20:05:51.686085 | orchestrator | 2025-07-06 20:05:51 | INFO  | D [0] - dotfiles
2025-07-06 20:06:01.733587 | orchestrator | 2025-07-06 20:06:01 | INFO  | D [0] - homer
2025-07-06 20:06:01.733731 | orchestrator | 2025-07-06 20:06:01 | INFO  | D [0] - netdata
2025-07-06 20:06:01.733758 | orchestrator | 2025-07-06 20:06:01 | INFO  | D [0] - openstackclient
2025-07-06 20:06:01.733858 | orchestrator | 2025-07-06 20:06:01 | INFO  | D [0] - phpmyadmin
2025-07-06 20:06:01.733880 | orchestrator | 2025-07-06 20:06:01 | INFO  | A [0] - common
2025-07-06 20:06:01.737080 | orchestrator | 2025-07-06 20:06:01 | INFO  | A [1] -- loadbalancer
2025-07-06 20:06:01.737158 | orchestrator | 2025-07-06 20:06:01 | INFO  | D [2] --- opensearch
2025-07-06 20:06:01.737180 | orchestrator | 2025-07-06 20:06:01 | INFO  | A [2] --- mariadb-ng
2025-07-06 20:06:01.737610 | orchestrator | 2025-07-06 20:06:01 | INFO  | D [3] ---- horizon
2025-07-06 20:06:01.737693 | orchestrator | 2025-07-06 20:06:01 | INFO  | A [3] ---- keystone
2025-07-06 20:06:01.737733 | orchestrator | 2025-07-06 20:06:01 | INFO  | A [4] ----- neutron
2025-07-06 20:06:01.738141 | orchestrator | 2025-07-06 20:06:01 | INFO  | D [5] ------ wait-for-nova
2025-07-06 20:06:01.738211 | orchestrator | 2025-07-06 20:06:01 | INFO  | A [5] ------ octavia
2025-07-06 20:06:01.739226 | orchestrator | 2025-07-06 20:06:01 | INFO  | D [4] ----- barbican
2025-07-06 20:06:01.739250 | orchestrator | 2025-07-06 20:06:01 | INFO  | D [4] ----- designate
2025-07-06 20:06:01.740168 | orchestrator | 2025-07-06 20:06:01 | INFO  | D [4] ----- ironic
2025-07-06 20:06:01.740383 | orchestrator | 2025-07-06 20:06:01 | INFO  | D [4] ----- placement
2025-07-06 20:06:01.740407 | orchestrator | 2025-07-06 20:06:01 | INFO  | D [4] ----- magnum
2025-07-06 20:06:01.740426 | orchestrator | 2025-07-06 20:06:01 | INFO  | A [1] -- openvswitch
2025-07-06 20:06:01.740456 | orchestrator | 2025-07-06 20:06:01 | INFO  | D [2] --- ovn
2025-07-06 20:06:01.740660 | orchestrator | 2025-07-06 20:06:01 | INFO  | D [1] -- memcached
2025-07-06 20:06:01.741014 | orchestrator | 2025-07-06 20:06:01 | INFO  | D [1] -- redis
2025-07-06 20:06:01.741040 | orchestrator | 2025-07-06 20:06:01 | INFO  | D [1] -- rabbitmq-ng
2025-07-06 20:06:01.741134 | orchestrator | 2025-07-06 20:06:01 | INFO  | A [0] - kubernetes
2025-07-06 20:06:01.743567 | orchestrator | 2025-07-06 20:06:01 | INFO  | D [1] -- kubeconfig
2025-07-06 20:06:01.743597 | orchestrator | 2025-07-06 20:06:01 | INFO  | A [1] -- copy-kubeconfig
2025-07-06 20:06:01.743736 | orchestrator | 2025-07-06 20:06:01 | INFO  | A [0] - ceph
2025-07-06 20:06:01.745819 | orchestrator | 2025-07-06 20:06:01 | INFO  | A [1] -- ceph-pools
2025-07-06 20:06:01.745946 | orchestrator | 2025-07-06 20:06:01 | INFO  | A [2] --- copy-ceph-keys
2025-07-06 20:06:01.746151 | orchestrator | 2025-07-06 20:06:01 | INFO  | A [3] ---- cephclient
2025-07-06 20:06:01.746167 | orchestrator | 2025-07-06 20:06:01 | INFO  | D [4] ----- ceph-bootstrap-dashboard
2025-07-06 20:06:01.746209 | orchestrator | 2025-07-06 20:06:01 | INFO  | A [4] ----- wait-for-keystone
2025-07-06 20:06:01.746237 | orchestrator | 2025-07-06 20:06:01 | INFO  | D [5] ------ kolla-ceph-rgw
2025-07-06 20:06:01.746248 | orchestrator | 2025-07-06 20:06:01 | INFO  | D [5] ------ glance
2025-07-06 20:06:01.746476 | orchestrator | 2025-07-06 20:06:01 | INFO  | D [5] ------ cinder
2025-07-06 20:06:01.746498 | orchestrator | 2025-07-06 20:06:01 | INFO  | D [5] ------ nova
2025-07-06 20:06:01.746821 | orchestrator | 2025-07-06 20:06:01 | INFO  | A [4] ----- prometheus
2025-07-06 20:06:01.746842 | orchestrator | 2025-07-06 20:06:01 | INFO  | D [5] ------ grafana
2025-07-06 20:06:01.903206 | orchestrator | 2025-07-06 20:06:01 | INFO  | All tasks of the collection nutshell are prepared for execution
2025-07-06 20:06:01.903302 | orchestrator | 2025-07-06 20:06:01 | INFO  | Tasks are running in the background
2025-07-06 20:06:04.210283 | orchestrator | 2025-07-06 20:06:04 | INFO  | No task IDs specified, wait for all
currently running tasks
2025-07-06 20:06:06.346118 | orchestrator | 2025-07-06 20:06:06 | INFO  | Task cf6788d9-4707-4857-ab95-e34460d90658 is in state STARTED
2025-07-06 20:06:06.348955 | orchestrator | 2025-07-06 20:06:06 | INFO  | Task c5b44aff-4167-4308-bb24-9bc5a2ae96d4 is in state STARTED
2025-07-06 20:06:06.349032 | orchestrator | 2025-07-06 20:06:06 | INFO  | Task 91d93b42-4dfe-45b5-9db7-d8368e2e01ab is in state STARTED
2025-07-06 20:06:06.349046 | orchestrator | 2025-07-06 20:06:06 | INFO  | Task 7a66c86e-0e32-456a-82fd-9550b414ecf4 is in state STARTED
2025-07-06 20:06:06.349056 | orchestrator | 2025-07-06 20:06:06 | INFO  | Task 651cb631-a207-479c-ae69-df0ab6a4b240 is in state STARTED
2025-07-06 20:06:06.349075 | orchestrator | 2025-07-06 20:06:06 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:06:06.349818 | orchestrator | 2025-07-06 20:06:06 | INFO  | Task 152bc41e-5e99-44d2-9430-511a715380cb is in state STARTED
2025-07-06 20:06:06.349853 | orchestrator | 2025-07-06 20:06:06 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:06:31.014157 | orchestrator | 2025-07-06 20:06:31 | INFO  | Task cf6788d9-4707-4857-ab95-e34460d90658 is in state SUCCESS
2025-07-06 20:06:31.018401 | orchestrator |
2025-07-06 20:06:31.018466 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2025-07-06 20:06:31.018482 | orchestrator |
2025-07-06 20:06:31.018494 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2025-07-06 20:06:31.018507 | orchestrator | Sunday 06 July 2025 20:06:12 +0000 (0:00:00.281) 0:00:00.281 ***********
2025-07-06 20:06:31.018518 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:06:31.018530 | orchestrator | changed: [testbed-manager]
2025-07-06 20:06:31.018541 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:06:31.018552 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:06:31.018563 | orchestrator | changed: [testbed-node-3]
2025-07-06 20:06:31.018574 | orchestrator | changed: [testbed-node-4]
2025-07-06 20:06:31.018585 | orchestrator | changed: [testbed-node-5]
2025-07-06 20:06:31.018596 | orchestrator |
2025-07-06 20:06:31.018607 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.]
******** 2025-07-06 20:06:31.018618 | orchestrator | Sunday 06 July 2025 20:06:16 +0000 (0:00:03.731) 0:00:04.012 *********** 2025-07-06 20:06:31.018629 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-07-06 20:06:31.018640 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-07-06 20:06:31.018651 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-07-06 20:06:31.018671 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-07-06 20:06:31.018684 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-07-06 20:06:31.018696 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-07-06 20:06:31.018764 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-07-06 20:06:31.018786 | orchestrator | 2025-07-06 20:06:31.018806 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] *** 2025-07-06 20:06:31.018824 | orchestrator | Sunday 06 July 2025 20:06:17 +0000 (0:00:01.752) 0:00:05.765 *********** 2025-07-06 20:06:31.018846 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-06 20:06:16.767974', 'end': '2025-07-06 20:06:16.773743', 'delta': '0:00:00.005769', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-07-06 20:06:31.018880 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': 
'', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-06 20:06:16.762353', 'end': '2025-07-06 20:06:16.770714', 'delta': '0:00:00.008361', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-07-06 20:06:31.018901 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-06 20:06:16.944423', 'end': '2025-07-06 20:06:16.953421', 'delta': '0:00:00.008998', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-07-06 20:06:31.018946 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-06 20:06:16.924302', 'end': '2025-07-06 20:06:16.934981', 'delta': '0:00:00.010679', 'failed': False, 'msg': 'non-zero return 
code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-07-06 20:06:31.018959 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-06 20:06:17.194889', 'end': '2025-07-06 20:06:17.204582', 'delta': '0:00:00.009693', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-07-06 20:06:31.018982 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-06 20:06:17.325179', 'end': '2025-07-06 20:06:17.337030', 'delta': '0:00:00.011851', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 
'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-07-06 20:06:31.018994 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-06 20:06:17.518039', 'end': '2025-07-06 20:06:17.526836', 'delta': '0:00:00.008797', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-07-06 20:06:31.019005 | orchestrator | 2025-07-06 20:06:31.019016 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] 
2025-07-06 20:06:31.019027 | orchestrator | Sunday 06 July 2025 20:06:20 +0000 (0:00:02.519) 0:00:08.284 ***********
2025-07-06 20:06:31.019037 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-07-06 20:06:31.019048 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-07-06 20:06:31.019059 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-07-06 20:06:31.019070 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-07-06 20:06:31.019080 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-07-06 20:06:31.019091 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-07-06 20:06:31.019101 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-07-06 20:06:31.019112 | orchestrator |
2025-07-06 20:06:31.019123 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2025-07-06 20:06:31.019133 | orchestrator | Sunday 06 July 2025 20:06:22 +0000 (0:00:02.625) 0:00:10.909 ***********
2025-07-06 20:06:31.019144 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2025-07-06 20:06:31.019155 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2025-07-06 20:06:31.019165 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2025-07-06 20:06:31.019176 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2025-07-06 20:06:31.019186 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2025-07-06 20:06:31.019202 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2025-07-06 20:06:31.019213 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2025-07-06 20:06:31.019224 | orchestrator |
2025-07-06 20:06:31.019235 | orchestrator | PLAY RECAP *********************************************************************
2025-07-06 20:06:31.019267 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 20:06:31.019280 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 20:06:31.019291 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 20:06:31.019302 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 20:06:31.019313 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 20:06:31.019323 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 20:06:31.019334 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 20:06:31.019345 | orchestrator |
2025-07-06 20:06:31.019356 | orchestrator |
2025-07-06 20:06:31.019367 | orchestrator | TASKS RECAP ********************************************************************
2025-07-06 20:06:31.019378 | orchestrator | Sunday 06 July 2025 20:06:27 +0000 (0:00:04.889) 0:00:15.798 ***********
2025-07-06 20:06:31.019389 | orchestrator | ===============================================================================
2025-07-06 20:06:31.019400 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 4.89s
2025-07-06 20:06:31.019410 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.73s
2025-07-06 20:06:31.019421 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.63s
2025-07-06 20:06:31.019432 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.52s
2025-07-06 20:06:31.019443 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links.
-------- 1.75s
2025-07-06 20:06:31.019454 | orchestrator | 2025-07-06 20:06:31 | INFO  | Task c5b44aff-4167-4308-bb24-9bc5a2ae96d4 is in state STARTED
2025-07-06 20:06:31.022079 | orchestrator | 2025-07-06 20:06:31 | INFO  | Task c288be2b-4ea9-4507-8386-180f0422aca7 is in state STARTED
2025-07-06 20:06:31.027087 | orchestrator | 2025-07-06 20:06:31 | INFO  | Task 91d93b42-4dfe-45b5-9db7-d8368e2e01ab is in state STARTED
2025-07-06 20:06:31.031987 | orchestrator | 2025-07-06 20:06:31 | INFO  | Task 7a66c86e-0e32-456a-82fd-9550b414ecf4 is in state STARTED
2025-07-06 20:06:31.036431 | orchestrator | 2025-07-06 20:06:31 | INFO  | Task 651cb631-a207-479c-ae69-df0ab6a4b240 is in state STARTED
2025-07-06 20:06:31.042091 | orchestrator | 2025-07-06 20:06:31 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:06:31.045543 | orchestrator | 2025-07-06 20:06:31 | INFO  | Task 152bc41e-5e99-44d2-9430-511a715380cb is in state STARTED
2025-07-06 20:06:31.045579 | orchestrator | 2025-07-06 20:06:31 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:06:43.304862 | orchestrator | 2025-07-06 20:06:43 | INFO  | Task c5b44aff-4167-4308-bb24-9bc5a2ae96d4 is in state STARTED
2025-07-06 20:06:43.307983 | orchestrator | 2025-07-06 20:06:43 | INFO  | Task c288be2b-4ea9-4507-8386-180f0422aca7 is in state STARTED
2025-07-06 20:06:43.311238 | orchestrator | 2025-07-06 20:06:43 | INFO  | Task 91d93b42-4dfe-45b5-9db7-d8368e2e01ab is in state STARTED
2025-07-06 20:06:43.314202 | orchestrator | 2025-07-06 20:06:43 | INFO  | Task 7a66c86e-0e32-456a-82fd-9550b414ecf4 is in state STARTED
2025-07-06 20:06:43.317791 | orchestrator | 2025-07-06 20:06:43 | INFO  | Task 651cb631-a207-479c-ae69-df0ab6a4b240 is in state STARTED
2025-07-06 20:06:43.321614 | orchestrator | 2025-07-06 20:06:43 | INFO  | Task 6477b599-2e8b-4184-84cf-562df1af0a57 is in state STARTED
2025-07-06 20:06:43.328878 | orchestrator | 2025-07-06 20:06:43 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:06:43.333744 | orchestrator |
2025-07-06 20:06:43.333800 | orchestrator |
2025-07-06 20:06:43.333815 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2025-07-06 20:06:43.333827 | orchestrator |
2025-07-06 20:06:43.333839 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2025-07-06 20:06:43.333877 | orchestrator | Sunday 06 July 2025 20:06:06 +0000 (0:00:00.124) 0:00:00.124 ***********
2025-07-06 20:06:43.333889 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:06:43.333902 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:06:43.333912 | orchestrator | ok: [testbed-node-5]
ok: [testbed-node-0]
2025-07-06 20:06:43.333934 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:06:43.333944 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:06:43.333955 | orchestrator |
2025-07-06 20:06:43.333966 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2025-07-06 20:06:43.333977 | orchestrator | Sunday 06 July 2025 20:06:06 +0000 (0:00:00.613) 0:00:00.738 ***********
2025-07-06 20:06:43.333988 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:06:43.333999 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:06:43.334010 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:06:43.334074 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:06:43.334086 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:06:43.334096 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:06:43.334107 | orchestrator |
2025-07-06 20:06:43.334118 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2025-07-06 20:06:43.334129 | orchestrator | Sunday 06 July 2025 20:06:07 +0000 (0:00:00.578) 0:00:01.316 ***********
2025-07-06 20:06:43.334140 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:06:43.334151 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:06:43.334162 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:06:43.334173 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:06:43.334184 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:06:43.334195 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:06:43.334206 | orchestrator |
2025-07-06 20:06:43.334217 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2025-07-06 20:06:43.334227 | orchestrator | Sunday 06 July 2025 20:06:08 +0000 (0:00:00.641) 0:00:01.957 ***********
2025-07-06 20:06:43.334239 | orchestrator | changed: [testbed-node-3]
2025-07-06 20:06:43.334249 | orchestrator | changed: [testbed-node-4]
2025-07-06 20:06:43.334260 | orchestrator | changed: [testbed-node-5]
2025-07-06 20:06:43.334270 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:06:43.334281 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:06:43.334321 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:06:43.334335 | orchestrator |
2025-07-06 20:06:43.334347 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2025-07-06 20:06:43.334375 | orchestrator | Sunday 06 July 2025 20:06:10 +0000 (0:00:01.964) 0:00:03.921 ***********
2025-07-06 20:06:43.334388 | orchestrator | changed: [testbed-node-3]
2025-07-06 20:06:43.334402 | orchestrator | changed: [testbed-node-4]
2025-07-06 20:06:43.334414 | orchestrator | changed: [testbed-node-5]
2025-07-06 20:06:43.334426 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:06:43.334439 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:06:43.334451 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:06:43.334464 | orchestrator |
2025-07-06 20:06:43.334476 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2025-07-06 20:06:43.334489 | orchestrator | Sunday 06 July 2025 20:06:11 +0000 (0:00:01.222) 0:00:05.143 ***********
2025-07-06 20:06:43.334502 | orchestrator | changed: [testbed-node-3]
2025-07-06 20:06:43.334514 | orchestrator | changed: [testbed-node-4]
2025-07-06 20:06:43.334527 | orchestrator | changed: [testbed-node-5]
2025-07-06 20:06:43.334540 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:06:43.334551 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:06:43.334563 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:06:43.334576 | orchestrator |
2025-07-06 20:06:43.334588 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2025-07-06 20:06:43.334602 | orchestrator | Sunday 06 July 2025 20:06:13 +0000 (0:00:01.918) 0:00:07.062 ***********
2025-07-06 20:06:43.334615 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:06:43.334637 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:06:43.334648 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:06:43.334658 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:06:43.334669 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:06:43.334680 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:06:43.334691 | orchestrator |
2025-07-06 20:06:43.334724 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2025-07-06 20:06:43.334735 | orchestrator | Sunday 06 July 2025 20:06:14 +0000 (0:00:00.874) 0:00:07.937 ***********
2025-07-06 20:06:43.334746 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:06:43.334757 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:06:43.334768 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:06:43.334778 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:06:43.334789 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:06:43.334800 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:06:43.334810 | orchestrator |
2025-07-06 20:06:43.334821 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2025-07-06 20:06:43.334833 | orchestrator | Sunday 06 July 2025 20:06:14 +0000 (0:00:00.574) 0:00:08.511 ***********
2025-07-06 20:06:43.334843 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2025-07-06 20:06:43.334854 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-07-06 20:06:43.334865 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:06:43.334876 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2025-07-06 20:06:43.334887 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-07-06 20:06:43.334897 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:06:43.334908 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2025-07-06 20:06:43.334919 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-07-06 20:06:43.334930 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:06:43.334941 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2025-07-06 20:06:43.334967 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-07-06 20:06:43.334979 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:06:43.334990 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2025-07-06 20:06:43.335001 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-07-06 20:06:43.335012 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:06:43.335023 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2025-07-06 20:06:43.335034 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-07-06 20:06:43.335044 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:06:43.335055 | orchestrator |
2025-07-06 20:06:43.335066 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2025-07-06 20:06:43.335077 | orchestrator | Sunday 06 July 2025 20:06:15 +0000 (0:00:00.847) 0:00:09.359 ***********
2025-07-06 20:06:43.335088 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:06:43.335099 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:06:43.335110 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:06:43.335120 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:06:43.335131 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:06:43.335142 | orchestrator | skipping: [testbed-node-2]
2025-07-06
20:06:43.335153 | orchestrator | 2025-07-06 20:06:43.335164 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-07-06 20:06:43.335176 | orchestrator | Sunday 06 July 2025 20:06:16 +0000 (0:00:01.348) 0:00:10.707 *********** 2025-07-06 20:06:43.335187 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:06:43.335198 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:06:43.335209 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:06:43.335226 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:06:43.335237 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:06:43.335248 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:06:43.335259 | orchestrator | 2025-07-06 20:06:43.335270 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-07-06 20:06:43.335281 | orchestrator | Sunday 06 July 2025 20:06:17 +0000 (0:00:00.493) 0:00:11.201 *********** 2025-07-06 20:06:43.335292 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:06:43.335303 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:06:43.335313 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:06:43.335324 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:06:43.335335 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:06:43.335345 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:06:43.335356 | orchestrator | 2025-07-06 20:06:43.335367 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-07-06 20:06:43.335378 | orchestrator | Sunday 06 July 2025 20:06:23 +0000 (0:00:06.025) 0:00:17.227 *********** 2025-07-06 20:06:43.335389 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:06:43.335400 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:06:43.335411 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:06:43.335421 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:06:43.335432 
| orchestrator | skipping: [testbed-node-1] 2025-07-06 20:06:43.335443 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:06:43.335454 | orchestrator | 2025-07-06 20:06:43.335465 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-07-06 20:06:43.335476 | orchestrator | Sunday 06 July 2025 20:06:24 +0000 (0:00:01.416) 0:00:18.643 *********** 2025-07-06 20:06:43.335487 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:06:43.335497 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:06:43.335508 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:06:43.335519 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:06:43.335529 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:06:43.335540 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:06:43.335551 | orchestrator | 2025-07-06 20:06:43.335562 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-07-06 20:06:43.335575 | orchestrator | Sunday 06 July 2025 20:06:27 +0000 (0:00:02.425) 0:00:21.069 *********** 2025-07-06 20:06:43.335586 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:06:43.335596 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:06:43.335607 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:06:43.335618 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:06:43.335629 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:06:43.335640 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:06:43.335650 | orchestrator | 2025-07-06 20:06:43.335661 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-07-06 20:06:43.335672 | orchestrator | Sunday 06 July 2025 20:06:28 +0000 (0:00:00.906) 0:00:21.976 *********** 2025-07-06 20:06:43.335683 | orchestrator | changed: [testbed-node-3] => (item=rancher) 2025-07-06 20:06:43.335694 | orchestrator | changed: [testbed-node-4] => 
(item=rancher) 2025-07-06 20:06:43.335722 | orchestrator | changed: [testbed-node-5] => (item=rancher) 2025-07-06 20:06:43.335733 | orchestrator | changed: [testbed-node-3] => (item=rancher/k3s) 2025-07-06 20:06:43.335743 | orchestrator | changed: [testbed-node-4] => (item=rancher/k3s) 2025-07-06 20:06:43.335754 | orchestrator | changed: [testbed-node-0] => (item=rancher) 2025-07-06 20:06:43.335765 | orchestrator | changed: [testbed-node-5] => (item=rancher/k3s) 2025-07-06 20:06:43.335776 | orchestrator | changed: [testbed-node-1] => (item=rancher) 2025-07-06 20:06:43.335786 | orchestrator | changed: [testbed-node-0] => (item=rancher/k3s) 2025-07-06 20:06:43.335797 | orchestrator | changed: [testbed-node-2] => (item=rancher) 2025-07-06 20:06:43.335808 | orchestrator | changed: [testbed-node-1] => (item=rancher/k3s) 2025-07-06 20:06:43.335818 | orchestrator | changed: [testbed-node-2] => (item=rancher/k3s) 2025-07-06 20:06:43.335836 | orchestrator | 2025-07-06 20:06:43.335847 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-07-06 20:06:43.335858 | orchestrator | Sunday 06 July 2025 20:06:30 +0000 (0:00:02.831) 0:00:24.807 *********** 2025-07-06 20:06:43.335869 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:06:43.335880 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:06:43.335890 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:06:43.335901 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:06:43.335912 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:06:43.335923 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:06:43.335933 | orchestrator | 2025-07-06 20:06:43.336255 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-07-06 20:06:43.336269 | orchestrator | 2025-07-06 20:06:43.336280 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-07-06 
20:06:43.336291 | orchestrator | Sunday 06 July 2025 20:06:33 +0000 (0:00:02.302) 0:00:27.110 *********** 2025-07-06 20:06:43.336302 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:06:43.336313 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:06:43.336323 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:06:43.336334 | orchestrator | 2025-07-06 20:06:43.336345 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-07-06 20:06:43.336356 | orchestrator | Sunday 06 July 2025 20:06:34 +0000 (0:00:01.241) 0:00:28.351 *********** 2025-07-06 20:06:43.336367 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:06:43.336377 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:06:43.336388 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:06:43.336399 | orchestrator | 2025-07-06 20:06:43.336409 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-07-06 20:06:43.336420 | orchestrator | Sunday 06 July 2025 20:06:35 +0000 (0:00:01.321) 0:00:29.672 *********** 2025-07-06 20:06:43.336431 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:06:43.336442 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:06:43.336453 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:06:43.336463 | orchestrator | 2025-07-06 20:06:43.336474 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-07-06 20:06:43.336485 | orchestrator | Sunday 06 July 2025 20:06:36 +0000 (0:00:01.076) 0:00:30.749 *********** 2025-07-06 20:06:43.336496 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:06:43.336507 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:06:43.336517 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:06:43.336528 | orchestrator | 2025-07-06 20:06:43.336539 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-07-06 20:06:43.336550 | orchestrator | Sunday 06 July 2025 20:06:37 +0000 
(0:00:00.804) 0:00:31.554 *********** 2025-07-06 20:06:43.336560 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:06:43.336571 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:06:43.336582 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:06:43.336593 | orchestrator | 2025-07-06 20:06:43.336603 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2025-07-06 20:06:43.336614 | orchestrator | Sunday 06 July 2025 20:06:38 +0000 (0:00:00.428) 0:00:31.983 *********** 2025-07-06 20:06:43.336625 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:06:43.336635 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:06:43.336646 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:06:43.336657 | orchestrator | 2025-07-06 20:06:43.336667 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2025-07-06 20:06:43.336678 | orchestrator | Sunday 06 July 2025 20:06:39 +0000 (0:00:00.922) 0:00:32.906 *********** 2025-07-06 20:06:43.336689 | orchestrator | ERROR! 
The requested handler 'restart k3s' was not found in either the main handlers list nor in the listening handlers list 2025-07-06 20:06:43.336768 | orchestrator | 2025-07-06 20:06:43 | INFO  | Task 152bc41e-5e99-44d2-9430-511a715380cb is in state SUCCESS 2025-07-06 20:06:43.336780 | orchestrator | 2025-07-06 20:06:43 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:06:46.391259 | orchestrator | 2025-07-06 20:06:46 | INFO  | Task c5b44aff-4167-4308-bb24-9bc5a2ae96d4 is in state STARTED 2025-07-06 20:06:46.397826 | orchestrator | 2025-07-06 20:06:46 | INFO  | Task c288be2b-4ea9-4507-8386-180f0422aca7 is in state STARTED 2025-07-06 20:06:46.400827 | orchestrator | 2025-07-06 20:06:46 | INFO  | Task 91d93b42-4dfe-45b5-9db7-d8368e2e01ab is in state STARTED 2025-07-06 20:06:46.405844 | orchestrator | 2025-07-06 20:06:46 | INFO  | Task 7a66c86e-0e32-456a-82fd-9550b414ecf4 is in state STARTED 2025-07-06 20:06:46.407113 | orchestrator | 2025-07-06 20:06:46 | INFO  | Task 651cb631-a207-479c-ae69-df0ab6a4b240 is in state STARTED 2025-07-06 20:06:46.408028 | orchestrator | 2025-07-06 20:06:46 | INFO  | Task 6477b599-2e8b-4184-84cf-562df1af0a57 is in state STARTED 2025-07-06 20:06:46.410379 | orchestrator | 2025-07-06 20:06:46 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:06:46.414071 | orchestrator | 2025-07-06 20:06:46 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:06:49.468554 | orchestrator | 2025-07-06 20:06:49 | INFO  | Task c5b44aff-4167-4308-bb24-9bc5a2ae96d4 is in state STARTED 2025-07-06 20:06:49.472493 | orchestrator | 2025-07-06 20:06:49 | INFO  | Task c288be2b-4ea9-4507-8386-180f0422aca7 is in state STARTED 2025-07-06 20:06:49.474208 | orchestrator | 2025-07-06 20:06:49 | INFO  | Task 91d93b42-4dfe-45b5-9db7-d8368e2e01ab is in state STARTED 2025-07-06 20:06:49.478591 | orchestrator | 2025-07-06 20:06:49 | INFO  | Task 7a66c86e-0e32-456a-82fd-9550b414ecf4 is in state STARTED 2025-07-06 
20:06:49.479061 | orchestrator | 2025-07-06 20:06:49 | INFO  | Task 651cb631-a207-479c-ae69-df0ab6a4b240 is in state STARTED 2025-07-06 20:06:49.481969 | orchestrator | 2025-07-06 20:06:49 | INFO  | Task 6477b599-2e8b-4184-84cf-562df1af0a57 is in state STARTED 2025-07-06 20:06:49.482902 | orchestrator | 2025-07-06 20:06:49 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:06:49.483123 | orchestrator | 2025-07-06 20:06:49 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:06:52.559179 | orchestrator | 2025-07-06 20:06:52 | INFO  | Task c5b44aff-4167-4308-bb24-9bc5a2ae96d4 is in state STARTED 2025-07-06 20:06:52.562658 | orchestrator | 2025-07-06 20:06:52 | INFO  | Task c288be2b-4ea9-4507-8386-180f0422aca7 is in state STARTED 2025-07-06 20:06:52.562751 | orchestrator | 2025-07-06 20:06:52 | INFO  | Task 91d93b42-4dfe-45b5-9db7-d8368e2e01ab is in state SUCCESS 2025-07-06 20:06:52.562765 | orchestrator | 2025-07-06 20:06:52 | INFO  | Task 7a66c86e-0e32-456a-82fd-9550b414ecf4 is in state STARTED 2025-07-06 20:06:52.565613 | orchestrator | 2025-07-06 20:06:52 | INFO  | Task 664c5536-66a7-48d9-b908-16617cfdb15c is in state STARTED 2025-07-06 20:06:52.567802 | orchestrator | 2025-07-06 20:06:52 | INFO  | Task 651cb631-a207-479c-ae69-df0ab6a4b240 is in state STARTED 2025-07-06 20:06:52.567993 | orchestrator | 2025-07-06 20:06:52 | INFO  | Task 6477b599-2e8b-4184-84cf-562df1af0a57 is in state SUCCESS 2025-07-06 20:06:52.572791 | orchestrator | 2025-07-06 20:06:52 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:06:52.572844 | orchestrator | 2025-07-06 20:06:52 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:06:55.615327 | orchestrator | 2025-07-06 20:06:55 | INFO  | Task c5b44aff-4167-4308-bb24-9bc5a2ae96d4 is in state STARTED 2025-07-06 20:06:55.616148 | orchestrator | 2025-07-06 20:06:55 | INFO  | Task c288be2b-4ea9-4507-8386-180f0422aca7 is in state STARTED 2025-07-06 
20:06:55.616185 | orchestrator | 2025-07-06 20:06:55 | INFO  | Task 7a66c86e-0e32-456a-82fd-9550b414ecf4 is in state STARTED 2025-07-06 20:06:55.616470 | orchestrator | 2025-07-06 20:06:55 | INFO  | Task 664c5536-66a7-48d9-b908-16617cfdb15c is in state STARTED 2025-07-06 20:06:55.617094 | orchestrator | 2025-07-06 20:06:55 | INFO  | Task 651cb631-a207-479c-ae69-df0ab6a4b240 is in state STARTED 2025-07-06 20:06:55.622339 | orchestrator | 2025-07-06 20:06:55 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:06:55.622393 | orchestrator | 2025-07-06 20:06:55 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:06:58.675092 | orchestrator | 2025-07-06 20:06:58 | INFO  | Task c5b44aff-4167-4308-bb24-9bc5a2ae96d4 is in state STARTED 2025-07-06 20:06:58.677479 | orchestrator | 2025-07-06 20:06:58 | INFO  | Task c288be2b-4ea9-4507-8386-180f0422aca7 is in state STARTED 2025-07-06 20:06:58.677527 | orchestrator | 2025-07-06 20:06:58 | INFO  | Task 7a66c86e-0e32-456a-82fd-9550b414ecf4 is in state STARTED 2025-07-06 20:06:58.678653 | orchestrator | 2025-07-06 20:06:58 | INFO  | Task 664c5536-66a7-48d9-b908-16617cfdb15c is in state SUCCESS 2025-07-06 20:06:58.678926 | orchestrator | 2025-07-06 20:06:58 | INFO  | Task 651cb631-a207-479c-ae69-df0ab6a4b240 is in state SUCCESS 2025-07-06 20:06:58.679955 | orchestrator | 2025-07-06 20:06:58.679986 | orchestrator | 2025-07-06 20:06:58.680005 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-07-06 20:06:58.680024 | orchestrator | 2025-07-06 20:06:58.680041 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-07-06 20:06:58.680058 | orchestrator | Sunday 06 July 2025 20:06:13 +0000 (0:00:00.520) 0:00:00.520 *********** 2025-07-06 20:06:58.680072 | orchestrator | ok: [testbed-manager] => { 2025-07-06 20:06:58.680088 | orchestrator |  "msg": "The support for the homer_url_kibana 
has been removed. Please use the homer_url_opensearch_dashboards parameter." 2025-07-06 20:06:58.680103 | orchestrator | } 2025-07-06 20:06:58.680117 | orchestrator | 2025-07-06 20:06:58.680130 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-07-06 20:06:58.680144 | orchestrator | Sunday 06 July 2025 20:06:14 +0000 (0:00:00.483) 0:00:01.003 *********** 2025-07-06 20:06:58.680157 | orchestrator | ok: [testbed-manager] 2025-07-06 20:06:58.680171 | orchestrator | 2025-07-06 20:06:58.680184 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-07-06 20:06:58.680198 | orchestrator | Sunday 06 July 2025 20:06:15 +0000 (0:00:01.598) 0:00:02.601 *********** 2025-07-06 20:06:58.680211 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-07-06 20:06:58.680225 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-07-06 20:06:58.680237 | orchestrator | 2025-07-06 20:06:58.680250 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-07-06 20:06:58.680263 | orchestrator | Sunday 06 July 2025 20:06:17 +0000 (0:00:01.169) 0:00:03.770 *********** 2025-07-06 20:06:58.680415 | orchestrator | changed: [testbed-manager] 2025-07-06 20:06:58.680504 | orchestrator | 2025-07-06 20:06:58.680514 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-07-06 20:06:58.680523 | orchestrator | Sunday 06 July 2025 20:06:19 +0000 (0:00:02.541) 0:00:06.312 *********** 2025-07-06 20:06:58.680531 | orchestrator | changed: [testbed-manager] 2025-07-06 20:06:58.680538 | orchestrator | 2025-07-06 20:06:58.680556 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-07-06 20:06:58.680564 | orchestrator | Sunday 06 July 2025 20:06:21 +0000 (0:00:01.967) 0:00:08.280 *********** 2025-07-06 20:06:58.680572 | orchestrator | 
FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 2025-07-06 20:06:58.680580 | orchestrator | ok: [testbed-manager] 2025-07-06 20:06:58.680588 | orchestrator | 2025-07-06 20:06:58.680596 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-07-06 20:06:58.680604 | orchestrator | Sunday 06 July 2025 20:06:45 +0000 (0:00:24.240) 0:00:32.520 *********** 2025-07-06 20:06:58.680630 | orchestrator | changed: [testbed-manager] 2025-07-06 20:06:58.680638 | orchestrator | 2025-07-06 20:06:58.680646 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:06:58.680654 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:06:58.680664 | orchestrator | 2025-07-06 20:06:58.680671 | orchestrator | 2025-07-06 20:06:58.680722 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:06:58.680730 | orchestrator | Sunday 06 July 2025 20:06:48 +0000 (0:00:02.750) 0:00:35.270 *********** 2025-07-06 20:06:58.680738 | orchestrator | =============================================================================== 2025-07-06 20:06:58.680746 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 24.24s 2025-07-06 20:06:58.680754 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.75s 2025-07-06 20:06:58.680762 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.54s 2025-07-06 20:06:58.680770 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.97s 2025-07-06 20:06:58.680778 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.60s 2025-07-06 20:06:58.680786 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.17s 
2025-07-06 20:06:58.680794 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.48s
2025-07-06 20:06:58.680801 | orchestrator |
2025-07-06 20:06:58.680809 | orchestrator |
2025-07-06 20:06:58.680817 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2025-07-06 20:06:58.680825 | orchestrator |
2025-07-06 20:06:58.680833 | orchestrator | TASK [Get home directory of operator user] *************************************
2025-07-06 20:06:58.680840 | orchestrator | Sunday 06 July 2025 20:06:46 +0000 (0:00:00.262) 0:00:00.262 ***********
2025-07-06 20:06:58.680848 | orchestrator | ok: [testbed-manager]
2025-07-06 20:06:58.680856 | orchestrator |
2025-07-06 20:06:58.680864 | orchestrator | TASK [Create .kube directory] **************************************************
2025-07-06 20:06:58.680872 | orchestrator | Sunday 06 July 2025 20:06:47 +0000 (0:00:01.535) 0:00:01.798 ***********
2025-07-06 20:06:58.680879 | orchestrator | changed: [testbed-manager]
2025-07-06 20:06:58.680887 | orchestrator |
2025-07-06 20:06:58.680895 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-07-06 20:06:58.680903 | orchestrator | Sunday 06 July 2025 20:06:48 +0000 (0:00:00.900) 0:00:02.698 ***********
2025-07-06 20:06:58.680911 | orchestrator | fatal: [testbed-manager -> testbed-node-0(192.168.16.10)]: FAILED! => {"changed": false, "msg": "file not found: /etc/rancher/k3s/k3s.yaml"}
2025-07-06 20:06:58.680919 | orchestrator |
2025-07-06 20:06:58.680927 | orchestrator | PLAY RECAP *********************************************************************
2025-07-06 20:06:58.680935 | orchestrator | testbed-manager : ok=2  changed=1  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2025-07-06 20:06:58.680943 | orchestrator |
2025-07-06 20:06:58.680951 | orchestrator |
2025-07-06 20:06:58.680958 | orchestrator | TASKS RECAP ********************************************************************
2025-07-06 20:06:58.680966 | orchestrator | Sunday 06 July 2025 20:06:49 +0000 (0:00:01.007) 0:00:03.706 ***********
2025-07-06 20:06:58.680986 | orchestrator | ===============================================================================
2025-07-06 20:06:58.680995 | orchestrator | Get home directory of operator user ------------------------------------- 1.53s
2025-07-06 20:06:58.681003 | orchestrator | Get kubeconfig file ----------------------------------------------------- 1.01s
2025-07-06 20:06:58.681011 | orchestrator | Create .kube directory -------------------------------------------------- 0.90s
2025-07-06 20:06:58.681019 | orchestrator |
2025-07-06 20:06:58.681026 | orchestrator |
2025-07-06 20:06:58.681034 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2025-07-06 20:06:58.681042 | orchestrator |
2025-07-06 20:06:58.681050 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-07-06 20:06:58.681065 | orchestrator | Sunday 06 July 2025 20:06:55 +0000 (0:00:00.175) 0:00:00.175 ***********
2025-07-06 20:06:58.681073 | orchestrator | fatal: [testbed-manager -> testbed-node-0(192.168.16.10)]: FAILED! => {"changed": false, "msg": "file not found: /etc/rancher/k3s/k3s.yaml"}
2025-07-06 20:06:58.681081 | orchestrator |
2025-07-06 20:06:58.681089 | orchestrator | PLAY RECAP *********************************************************************
2025-07-06 20:06:58.681098 | orchestrator | testbed-manager : ok=0 changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2025-07-06 20:06:58.681106 | orchestrator |
2025-07-06 20:06:58.681113 | orchestrator |
2025-07-06 20:06:58.681121 | orchestrator | TASKS RECAP ********************************************************************
2025-07-06 20:06:58.681129 | orchestrator | Sunday 06 July 2025 20:06:56 +0000 (0:00:00.841) 0:00:01.017 ***********
2025-07-06 20:06:58.681137 | orchestrator | ===============================================================================
2025-07-06 20:06:58.681147 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.84s
2025-07-06 20:06:58.681156 | orchestrator |
2025-07-06 20:06:58.681166 | orchestrator |
2025-07-06 20:06:58.681176 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2025-07-06 20:06:58.681184 | orchestrator |
2025-07-06 20:06:58.681198 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2025-07-06 20:06:58.681208 | orchestrator | Sunday 06 July 2025 20:06:12 +0000 (0:00:00.622) 0:00:00.622 ***********
2025-07-06 20:06:58.681217 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2025-07-06 20:06:58.681228 | orchestrator |
2025-07-06 20:06:58.681237 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2025-07-06 20:06:58.681246 | orchestrator | Sunday 06 July 2025 20:06:12 +0000 (0:00:00.475) 0:00:01.097 ***********
2025-07-06 20:06:58.681255 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2025-07-06 20:06:58.681265 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2025-07-06 20:06:58.681274 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2025-07-06 20:06:58.681283 | orchestrator |
2025-07-06 20:06:58.681292 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2025-07-06 20:06:58.681302 | orchestrator | Sunday 06 July 2025 20:06:14 +0000 (0:00:01.746) 0:00:02.844 ***********
2025-07-06 20:06:58.681311 | orchestrator | changed: [testbed-manager]
2025-07-06 20:06:58.681320 | orchestrator |
2025-07-06 20:06:58.681330 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2025-07-06 20:06:58.681339 | orchestrator | Sunday 06 July 2025 20:06:16 +0000 (0:00:02.123) 0:00:04.968 ***********
2025-07-06 20:06:58.681348 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2025-07-06 20:06:58.681357 | orchestrator | ok: [testbed-manager]
2025-07-06 20:06:58.681366 | orchestrator |
2025-07-06 20:06:58.681375 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2025-07-06 20:06:58.681384 | orchestrator | Sunday 06 July 2025 20:06:52 +0000 (0:00:36.091) 0:00:41.060 ***********
2025-07-06 20:06:58.681393 | orchestrator | changed: [testbed-manager]
2025-07-06 20:06:58.681402 | orchestrator |
2025-07-06 20:06:58.681411 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2025-07-06 20:06:58.681421 | orchestrator | Sunday 06 July 2025 20:06:53 +0000 (0:00:00.884) 0:00:41.944 ***********
2025-07-06 20:06:58.681430 | orchestrator | ok: [testbed-manager]
2025-07-06 20:06:58.681439 | orchestrator |
2025-07-06 20:06:58.681448 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2025-07-06 20:06:58.681458 | orchestrator | Sunday 06 July 2025 20:06:54 +0000 (0:00:00.760) 0:00:42.704 ***********
2025-07-06 20:06:58.681467 | orchestrator | changed: [testbed-manager]
2025-07-06 20:06:58.681476 | orchestrator |
2025-07-06 20:06:58.681485 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2025-07-06 20:06:58.681501 | orchestrator | Sunday 06 July 2025 20:06:55 +0000 (0:00:01.341) 0:00:44.046 ***********
2025-07-06 20:06:58.681510 | orchestrator | changed: [testbed-manager]
2025-07-06 20:06:58.681517 | orchestrator |
2025-07-06 20:06:58.681525 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2025-07-06 20:06:58.681533 | orchestrator | Sunday 06 July 2025 20:06:56 +0000 (0:00:00.871) 0:00:44.918 ***********
2025-07-06 20:06:58.681541 | orchestrator | changed: [testbed-manager]
2025-07-06 20:06:58.681549 | orchestrator |
2025-07-06 20:06:58.681556 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2025-07-06 20:06:58.681564 | orchestrator | Sunday 06 July 2025 20:06:57 +0000 (0:00:00.712) 0:00:45.630 ***********
2025-07-06 20:06:58.681572 | orchestrator | ok: [testbed-manager]
2025-07-06 20:06:58.681579 | orchestrator |
2025-07-06 20:06:58.681587 | orchestrator | PLAY RECAP *********************************************************************
2025-07-06 20:06:58.681595 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 20:06:58.681603 | orchestrator |
2025-07-06 20:06:58.681611 | orchestrator |
2025-07-06 20:06:58.681619 | orchestrator | TASKS RECAP ********************************************************************
2025-07-06 20:06:58.681631 | orchestrator | Sunday 06 July 2025 20:06:57 +0000 (0:00:00.314) 0:00:45.944 ***********
2025-07-06 20:06:58.681639 | orchestrator | ===============================================================================
2025-07-06 20:06:58.681647 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 36.09s
2025-07-06 20:06:58.681654 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.12s
2025-07-06 20:06:58.681662 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.75s
2025-07-06 20:06:58.681670 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.34s
2025-07-06 20:06:58.681835 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.88s
2025-07-06 20:06:58.681849 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.87s
2025-07-06 20:06:58.681857 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.76s
2025-07-06 20:06:58.681864 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.71s
2025-07-06 20:06:58.681885 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.48s
2025-07-06 20:06:58.681894 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.31s
2025-07-06 20:06:58.681910 | orchestrator | 2025-07-06 20:06:58 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:06:58.681919 | orchestrator | 2025-07-06 20:06:58 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:07:01.724295 | orchestrator | 2025-07-06 20:07:01 | INFO  | Task c5b44aff-4167-4308-bb24-9bc5a2ae96d4 is in state STARTED
2025-07-06 20:07:01.726263 | orchestrator | 2025-07-06 20:07:01 | INFO  | Task c288be2b-4ea9-4507-8386-180f0422aca7 is in state STARTED
2025-07-06 20:07:01.726950 | orchestrator | 2025-07-06 20:07:01 | INFO  | Task 7a66c86e-0e32-456a-82fd-9550b414ecf4 is in state STARTED
2025-07-06 20:07:01.735320 | orchestrator | 2025-07-06 20:07:01 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:07:01.735364 | orchestrator | 2025-07-06 20:07:01 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:07:04.774798 | orchestrator | 2025-07-06 20:07:04 | INFO  | Task c5b44aff-4167-4308-bb24-9bc5a2ae96d4 is in state STARTED
2025-07-06 20:07:04.775020 | orchestrator | 2025-07-06 20:07:04 | INFO  | Task c288be2b-4ea9-4507-8386-180f0422aca7 is in state STARTED
2025-07-06 20:07:04.777522 | orchestrator | 2025-07-06 20:07:04 | INFO  | Task 7a66c86e-0e32-456a-82fd-9550b414ecf4 is in state STARTED
2025-07-06 20:07:04.777578 | orchestrator | 2025-07-06 20:07:04 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:07:04.777591 | orchestrator | 2025-07-06 20:07:04 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:07:07.816901 | orchestrator | 2025-07-06 20:07:07 | INFO  | Task c5b44aff-4167-4308-bb24-9bc5a2ae96d4 is in state STARTED
2025-07-06 20:07:07.817633 | orchestrator | 2025-07-06 20:07:07 | INFO  | Task
c288be2b-4ea9-4507-8386-180f0422aca7 is in state STARTED
2025-07-06 20:07:07.820891 | orchestrator | 2025-07-06 20:07:07 | INFO  | Task 7a66c86e-0e32-456a-82fd-9550b414ecf4 is in state STARTED
2025-07-06 20:07:07.824198 | orchestrator | 2025-07-06 20:07:07 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:07:07.824235 | orchestrator | 2025-07-06 20:07:07 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:07:10.882905 | orchestrator | 2025-07-06 20:07:10 | INFO  | Task c5b44aff-4167-4308-bb24-9bc5a2ae96d4 is in state STARTED
2025-07-06 20:07:10.889556 | orchestrator | 2025-07-06 20:07:10 | INFO  | Task c288be2b-4ea9-4507-8386-180f0422aca7 is in state STARTED
2025-07-06 20:07:10.895408 | orchestrator | 2025-07-06 20:07:10 | INFO  | Task 7a66c86e-0e32-456a-82fd-9550b414ecf4 is in state STARTED
2025-07-06 20:07:10.897734 | orchestrator | 2025-07-06 20:07:10 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:07:10.900694 | orchestrator | 2025-07-06 20:07:10 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:07:13.956025 | orchestrator | 2025-07-06 20:07:13 | INFO  | Task c5b44aff-4167-4308-bb24-9bc5a2ae96d4 is in state STARTED
2025-07-06 20:07:13.957332 | orchestrator | 2025-07-06 20:07:13 | INFO  | Task c288be2b-4ea9-4507-8386-180f0422aca7 is in state STARTED
2025-07-06 20:07:13.957793 | orchestrator | 2025-07-06 20:07:13 | INFO  | Task 7a66c86e-0e32-456a-82fd-9550b414ecf4 is in state STARTED
2025-07-06 20:07:13.957824 | orchestrator | 2025-07-06 20:07:13 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:07:13.957835 | orchestrator | 2025-07-06 20:07:13 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:07:17.023285 | orchestrator | 2025-07-06 20:07:17 | INFO  | Task c5b44aff-4167-4308-bb24-9bc5a2ae96d4 is in state STARTED
2025-07-06 20:07:17.028908 | orchestrator | 2025-07-06 20:07:17 | INFO  | Task c288be2b-4ea9-4507-8386-180f0422aca7 is in state STARTED
2025-07-06 20:07:17.031344 | orchestrator | 2025-07-06 20:07:17 | INFO  | Task 7a66c86e-0e32-456a-82fd-9550b414ecf4 is in state STARTED
2025-07-06 20:07:17.034360 | orchestrator | 2025-07-06 20:07:17 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:07:17.034971 | orchestrator | 2025-07-06 20:07:17 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:07:20.072369 | orchestrator |
2025-07-06 20:07:20.072461 | orchestrator |
2025-07-06 20:07:20.072476 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-06 20:07:20.072489 | orchestrator |
2025-07-06 20:07:20.072500 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-06 20:07:20.072513 | orchestrator | Sunday 06 July 2025 20:06:12 +0000 (0:00:00.537) 0:00:00.537 ***********
2025-07-06 20:07:20.072525 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2025-07-06 20:07:20.072536 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2025-07-06 20:07:20.072547 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2025-07-06 20:07:20.072558 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2025-07-06 20:07:20.072569 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2025-07-06 20:07:20.072606 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2025-07-06 20:07:20.072618 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2025-07-06 20:07:20.072628 | orchestrator |
2025-07-06 20:07:20.072639 | orchestrator | PLAY [Apply role netdata] ******************************************************
2025-07-06 20:07:20.072701 | orchestrator |
2025-07-06 20:07:20.072726 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2025-07-06 20:07:20.072738 | orchestrator | Sunday 06 July 2025 20:06:15 +0000 (0:00:02.653) 0:00:03.191 ***********
2025-07-06 20:07:20.072763 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-06 20:07:20.072783 | orchestrator |
2025-07-06 20:07:20.072794 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2025-07-06 20:07:20.072805 | orchestrator | Sunday 06 July 2025 20:06:17 +0000 (0:00:02.662) 0:00:05.853 ***********
2025-07-06 20:07:20.072816 | orchestrator | ok: [testbed-manager]
2025-07-06 20:07:20.072828 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:07:20.072839 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:07:20.072850 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:07:20.072861 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:07:20.072871 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:07:20.072882 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:07:20.072893 | orchestrator |
2025-07-06 20:07:20.072906 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2025-07-06 20:07:20.072919 | orchestrator | Sunday 06 July 2025 20:06:19 +0000 (0:00:01.985) 0:00:07.839 ***********
2025-07-06 20:07:20.072931 | orchestrator | ok: [testbed-manager]
2025-07-06 20:07:20.072944 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:07:20.072956 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:07:20.072969 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:07:20.072981 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:07:20.072994 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:07:20.073007 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:07:20.073019 | orchestrator |
2025-07-06 20:07:20.073032 | orchestrator | TASK [osism.services.netdata : Add
repository gpg key] *************
2025-07-06 20:07:20.073045 | orchestrator | Sunday 06 July 2025 20:06:23 +0000 (0:00:03.769) 0:00:11.609 ***********
2025-07-06 20:07:20.073058 | orchestrator | changed: [testbed-manager]
2025-07-06 20:07:20.073070 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:07:20.073083 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:07:20.073095 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:07:20.073107 | orchestrator | changed: [testbed-node-3]
2025-07-06 20:07:20.073119 | orchestrator | changed: [testbed-node-4]
2025-07-06 20:07:20.073132 | orchestrator | changed: [testbed-node-5]
2025-07-06 20:07:20.073145 | orchestrator |
2025-07-06 20:07:20.073158 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2025-07-06 20:07:20.073170 | orchestrator | Sunday 06 July 2025 20:06:27 +0000 (0:00:04.143) 0:00:15.752 ***********
2025-07-06 20:07:20.073183 | orchestrator | changed: [testbed-manager]
2025-07-06 20:07:20.073195 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:07:20.073208 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:07:20.073221 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:07:20.073233 | orchestrator | changed: [testbed-node-5]
2025-07-06 20:07:20.073246 | orchestrator | changed: [testbed-node-4]
2025-07-06 20:07:20.073259 | orchestrator | changed: [testbed-node-3]
2025-07-06 20:07:20.073272 | orchestrator |
2025-07-06 20:07:20.073283 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2025-07-06 20:07:20.073294 | orchestrator | Sunday 06 July 2025 20:06:38 +0000 (0:00:10.496) 0:00:26.249 ***********
2025-07-06 20:07:20.073305 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:07:20.073316 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:07:20.073327 | orchestrator | changed: [testbed-node-5]
2025-07-06 20:07:20.073346 | orchestrator | changed: [testbed-manager]
2025-07-06 20:07:20.073357 | orchestrator | changed: [testbed-node-3]
2025-07-06 20:07:20.073367 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:07:20.073378 | orchestrator | changed: [testbed-node-4]
2025-07-06 20:07:20.073388 | orchestrator |
2025-07-06 20:07:20.073399 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2025-07-06 20:07:20.073410 | orchestrator | Sunday 06 July 2025 20:06:57 +0000 (0:00:19.543) 0:00:45.792 ***********
2025-07-06 20:07:20.073422 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-06 20:07:20.073434 | orchestrator |
2025-07-06 20:07:20.073445 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2025-07-06 20:07:20.073456 | orchestrator | Sunday 06 July 2025 20:06:59 +0000 (0:00:01.176) 0:00:46.968 ***********
2025-07-06 20:07:20.073467 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2025-07-06 20:07:20.073478 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2025-07-06 20:07:20.073489 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2025-07-06 20:07:20.073500 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2025-07-06 20:07:20.073527 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2025-07-06 20:07:20.073538 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2025-07-06 20:07:20.073549 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2025-07-06 20:07:20.073560 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2025-07-06 20:07:20.073571 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2025-07-06 20:07:20.073582 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2025-07-06 20:07:20.073592 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2025-07-06 20:07:20.073603 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2025-07-06 20:07:20.073614 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2025-07-06 20:07:20.073624 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2025-07-06 20:07:20.073635 | orchestrator |
2025-07-06 20:07:20.073674 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2025-07-06 20:07:20.073696 | orchestrator | Sunday 06 July 2025 20:07:03 +0000 (0:00:04.334) 0:00:51.303 ***********
2025-07-06 20:07:20.073711 | orchestrator | ok: [testbed-manager]
2025-07-06 20:07:20.073722 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:07:20.073733 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:07:20.073744 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:07:20.073755 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:07:20.073765 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:07:20.073776 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:07:20.073786 | orchestrator |
2025-07-06 20:07:20.073797 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2025-07-06 20:07:20.073808 | orchestrator | Sunday 06 July 2025 20:07:04 +0000 (0:00:01.151) 0:00:52.454 ***********
2025-07-06 20:07:20.073818 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:07:20.073871 | orchestrator | changed: [testbed-manager]
2025-07-06 20:07:20.073883 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:07:20.073894 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:07:20.073905 | orchestrator | changed: [testbed-node-3]
2025-07-06 20:07:20.073915 | orchestrator | changed: [testbed-node-4]
2025-07-06 20:07:20.073926 | orchestrator | changed: [testbed-node-5]
2025-07-06 20:07:20.073936 | orchestrator |
2025-07-06 20:07:20.073947 | orchestrator | TASK
[osism.services.netdata : Add netdata user to docker group] ***************
2025-07-06 20:07:20.073958 | orchestrator | Sunday 06 July 2025 20:07:06 +0000 (0:00:01.762) 0:00:54.217 ***********
2025-07-06 20:07:20.073969 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:07:20.073980 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:07:20.073990 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:07:20.074009 | orchestrator | ok: [testbed-manager]
2025-07-06 20:07:20.074083 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:07:20.074095 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:07:20.074105 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:07:20.074116 | orchestrator |
2025-07-06 20:07:20.074127 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2025-07-06 20:07:20.074138 | orchestrator | Sunday 06 July 2025 20:07:08 +0000 (0:00:01.818) 0:00:56.035 ***********
2025-07-06 20:07:20.074149 | orchestrator | ok: [testbed-manager]
2025-07-06 20:07:20.074159 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:07:20.074170 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:07:20.074181 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:07:20.074191 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:07:20.074202 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:07:20.074213 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:07:20.074223 | orchestrator |
2025-07-06 20:07:20.074234 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2025-07-06 20:07:20.074245 | orchestrator | Sunday 06 July 2025 20:07:09 +0000 (0:00:01.848) 0:00:57.884 ***********
2025-07-06 20:07:20.074256 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2025-07-06 20:07:20.074270 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-06 20:07:20.074281 | orchestrator |
2025-07-06 20:07:20.074292 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2025-07-06 20:07:20.074303 | orchestrator | Sunday 06 July 2025 20:07:12 +0000 (0:00:02.131) 0:01:00.015 ***********
2025-07-06 20:07:20.074314 | orchestrator | changed: [testbed-manager]
2025-07-06 20:07:20.074325 | orchestrator |
2025-07-06 20:07:20.074335 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2025-07-06 20:07:20.074346 | orchestrator | Sunday 06 July 2025 20:07:14 +0000 (0:00:02.514) 0:01:02.530 ***********
2025-07-06 20:07:20.074357 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:07:20.074502 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:07:20.074515 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:07:20.074526 | orchestrator | changed: [testbed-manager]
2025-07-06 20:07:20.074536 | orchestrator | changed: [testbed-node-3]
2025-07-06 20:07:20.074547 | orchestrator | changed: [testbed-node-5]
2025-07-06 20:07:20.074557 | orchestrator | changed: [testbed-node-4]
2025-07-06 20:07:20.074568 | orchestrator |
2025-07-06 20:07:20.074579 | orchestrator | PLAY RECAP *********************************************************************
2025-07-06 20:07:20.074590 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 20:07:20.074602 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 20:07:20.074613 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 20:07:20.074624 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 20:07:20.074684 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 20:07:20.074698 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 20:07:20.074709 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 20:07:20.074720 | orchestrator |
2025-07-06 20:07:20.074731 | orchestrator |
2025-07-06 20:07:20.074752 | orchestrator | TASKS RECAP ********************************************************************
2025-07-06 20:07:20.074763 | orchestrator | Sunday 06 July 2025 20:07:18 +0000 (0:00:03.831) 0:01:06.361 ***********
2025-07-06 20:07:20.074774 | orchestrator | ===============================================================================
2025-07-06 20:07:20.074785 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 19.54s
2025-07-06 20:07:20.074796 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.50s
2025-07-06 20:07:20.074807 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.33s
2025-07-06 20:07:20.074824 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 4.14s
2025-07-06 20:07:20.074835 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.83s
2025-07-06 20:07:20.074846 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.77s
2025-07-06 20:07:20.074856 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.66s
2025-07-06 20:07:20.074867 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.65s
2025-07-06 20:07:20.074878 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.51s
2025-07-06 20:07:20.074889 | orchestrator | osism.services.netdata : Include host type specific tasks
--------------- 2.13s
2025-07-06 20:07:20.074900 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.99s
2025-07-06 20:07:20.074910 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.85s
2025-07-06 20:07:20.074921 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.82s
2025-07-06 20:07:20.074932 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.76s
2025-07-06 20:07:20.074942 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.18s
2025-07-06 20:07:20.074953 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.15s
2025-07-06 20:07:20.074965 | orchestrator | 2025-07-06 20:07:20 | INFO  | Task c5b44aff-4167-4308-bb24-9bc5a2ae96d4 is in state SUCCESS
2025-07-06 20:07:20.074976 | orchestrator | 2025-07-06 20:07:20 | INFO  | Task c288be2b-4ea9-4507-8386-180f0422aca7 is in state STARTED
2025-07-06 20:07:20.076629 | orchestrator | 2025-07-06 20:07:20 | INFO  | Task 7a66c86e-0e32-456a-82fd-9550b414ecf4 is in state STARTED
2025-07-06 20:07:20.078339 | orchestrator | 2025-07-06 20:07:20 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:07:20.078424 | orchestrator | 2025-07-06 20:07:20 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:07:23.125208 | orchestrator | 2025-07-06 20:07:23 | INFO  | Task c288be2b-4ea9-4507-8386-180f0422aca7 is in state STARTED
2025-07-06 20:07:23.125313 | orchestrator | 2025-07-06 20:07:23 | INFO  | Task 7a66c86e-0e32-456a-82fd-9550b414ecf4 is in state STARTED
2025-07-06 20:07:23.126249 | orchestrator | 2025-07-06 20:07:23 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:07:23.126280 | orchestrator | 2025-07-06 20:07:23 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:07:26.165560 | orchestrator | 2025-07-06 20:07:26 | INFO  | Task c288be2b-4ea9-4507-8386-180f0422aca7 is in state STARTED
2025-07-06 20:07:26.166160 | orchestrator | 2025-07-06 20:07:26 | INFO  | Task 7a66c86e-0e32-456a-82fd-9550b414ecf4 is in state STARTED
2025-07-06 20:07:26.167679 | orchestrator | 2025-07-06 20:07:26 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:07:26.167733 | orchestrator | 2025-07-06 20:07:26 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:07:29.212062 | orchestrator | 2025-07-06 20:07:29 | INFO  | Task c288be2b-4ea9-4507-8386-180f0422aca7 is in state STARTED
2025-07-06 20:07:29.213343 | orchestrator | 2025-07-06 20:07:29 | INFO  | Task 7a66c86e-0e32-456a-82fd-9550b414ecf4 is in state STARTED
2025-07-06 20:07:29.215539 | orchestrator | 2025-07-06 20:07:29 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:07:29.215575 | orchestrator | 2025-07-06 20:07:29 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:07:32.253195 | orchestrator | 2025-07-06 20:07:32 | INFO  | Task c288be2b-4ea9-4507-8386-180f0422aca7 is in state STARTED
2025-07-06 20:07:32.255697 | orchestrator | 2025-07-06 20:07:32 | INFO  | Task 7a66c86e-0e32-456a-82fd-9550b414ecf4 is in state STARTED
2025-07-06 20:07:32.255741 | orchestrator | 2025-07-06 20:07:32 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:07:32.255754 | orchestrator | 2025-07-06 20:07:32 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:07:35.293512 | orchestrator | 2025-07-06 20:07:35 | INFO  | Task c288be2b-4ea9-4507-8386-180f0422aca7 is in state STARTED
2025-07-06 20:07:35.296657 | orchestrator | 2025-07-06 20:07:35 | INFO  | Task 7a66c86e-0e32-456a-82fd-9550b414ecf4 is in state STARTED
2025-07-06 20:07:35.299677 | orchestrator | 2025-07-06 20:07:35 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:07:35.299709 | orchestrator | 2025-07-06 20:07:35 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:07:38.348117 | orchestrator | 2025-07-06 20:07:38 | INFO  | Task c288be2b-4ea9-4507-8386-180f0422aca7 is in state STARTED
2025-07-06 20:07:38.349202 | orchestrator | 2025-07-06 20:07:38 | INFO  | Task 7a66c86e-0e32-456a-82fd-9550b414ecf4 is in state STARTED
2025-07-06 20:07:38.351533 | orchestrator | 2025-07-06 20:07:38 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:07:38.351559 | orchestrator | 2025-07-06 20:07:38 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:07:41.400158 | orchestrator | 2025-07-06 20:07:41 | INFO  | Task c288be2b-4ea9-4507-8386-180f0422aca7 is in state STARTED
2025-07-06 20:07:41.401905 | orchestrator | 2025-07-06 20:07:41 | INFO  | Task 7a66c86e-0e32-456a-82fd-9550b414ecf4 is in state STARTED
2025-07-06 20:07:41.402740 | orchestrator | 2025-07-06 20:07:41 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:07:41.403159 | orchestrator | 2025-07-06 20:07:41 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:07:44.459011 | orchestrator | 2025-07-06 20:07:44 | INFO  | Task c288be2b-4ea9-4507-8386-180f0422aca7 is in state STARTED
2025-07-06 20:07:44.461089 | orchestrator | 2025-07-06 20:07:44 | INFO  | Task 7a66c86e-0e32-456a-82fd-9550b414ecf4 is in state STARTED
2025-07-06 20:07:44.463189 | orchestrator | 2025-07-06 20:07:44 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:07:44.463271 | orchestrator | 2025-07-06 20:07:44 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:07:47.521404 | orchestrator | 2025-07-06 20:07:47 | INFO  | Task c288be2b-4ea9-4507-8386-180f0422aca7 is in state STARTED
2025-07-06 20:07:47.524154 | orchestrator | 2025-07-06 20:07:47 | INFO  | Task 7a66c86e-0e32-456a-82fd-9550b414ecf4 is in state STARTED
2025-07-06 20:07:47.526129 | orchestrator | 2025-07-06 20:07:47 | INFO  | Task
1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:07:47.526157 | orchestrator | 2025-07-06 20:07:47 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:07:50.564785 | orchestrator | 2025-07-06 20:07:50 | INFO  | Task c288be2b-4ea9-4507-8386-180f0422aca7 is in state STARTED
2025-07-06 20:07:50.566301 | orchestrator | 2025-07-06 20:07:50 | INFO  | Task 7a66c86e-0e32-456a-82fd-9550b414ecf4 is in state STARTED
2025-07-06 20:07:50.568379 | orchestrator | 2025-07-06 20:07:50 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:07:50.568420 | orchestrator | 2025-07-06 20:07:50 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:07:53.619362 | orchestrator | 2025-07-06 20:07:53 | INFO  | Task c288be2b-4ea9-4507-8386-180f0422aca7 is in state STARTED
2025-07-06 20:07:53.620258 | orchestrator | 2025-07-06 20:07:53 | INFO  | Task 7a66c86e-0e32-456a-82fd-9550b414ecf4 is in state STARTED
2025-07-06 20:07:53.621414 | orchestrator | 2025-07-06 20:07:53 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:07:53.621430 | orchestrator | 2025-07-06 20:07:53 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:07:56.673372 | orchestrator | 2025-07-06 20:07:56 | INFO  | Task c288be2b-4ea9-4507-8386-180f0422aca7 is in state STARTED
2025-07-06 20:07:56.677560 | orchestrator | 2025-07-06 20:07:56 | INFO  | Task 7a66c86e-0e32-456a-82fd-9550b414ecf4 is in state STARTED
2025-07-06 20:07:56.683680 | orchestrator | 2025-07-06 20:07:56 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:07:56.683746 | orchestrator | 2025-07-06 20:07:56 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:07:59.717113 | orchestrator | 2025-07-06 20:07:59 | INFO  | Task c288be2b-4ea9-4507-8386-180f0422aca7 is in state SUCCESS
2025-07-06 20:07:59.717240 | orchestrator | 2025-07-06 20:07:59 | INFO  | Task 7a66c86e-0e32-456a-82fd-9550b414ecf4 is in state STARTED
2025-07-06 20:07:59.717256 | orchestrator | 2025-07-06 20:07:59 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:07:59.717264 | orchestrator | 2025-07-06 20:07:59 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:08:02.763469 | orchestrator | 2025-07-06 20:08:02 | INFO  | Task 7a66c86e-0e32-456a-82fd-9550b414ecf4 is in state STARTED
2025-07-06 20:08:02.765988 | orchestrator | 2025-07-06 20:08:02 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:08:02.766338 | orchestrator | 2025-07-06 20:08:02 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:08:05.817193 | orchestrator | 2025-07-06 20:08:05 | INFO  | Task 7a66c86e-0e32-456a-82fd-9550b414ecf4 is in state STARTED
2025-07-06 20:08:05.818769 | orchestrator | 2025-07-06 20:08:05 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:08:05.818825 | orchestrator | 2025-07-06 20:08:05 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:08:08.866281 | orchestrator | 2025-07-06 20:08:08 | INFO  | Task 7a66c86e-0e32-456a-82fd-9550b414ecf4 is in state STARTED
2025-07-06 20:08:08.868870 | orchestrator | 2025-07-06 20:08:08 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:08:08.868944 | orchestrator | 2025-07-06 20:08:08 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:08:11.904659 | orchestrator | 2025-07-06 20:08:11 | INFO  | Task 7a66c86e-0e32-456a-82fd-9550b414ecf4 is in state STARTED
2025-07-06 20:08:11.905019 | orchestrator | 2025-07-06 20:08:11 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:08:11.905356 | orchestrator | 2025-07-06 20:08:11 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:08:14.946714 | orchestrator | 2025-07-06 20:08:14 | INFO  | Task 7a66c86e-0e32-456a-82fd-9550b414ecf4 is in state STARTED
2025-07-06 20:08:14.948125 | orchestrator | 2025-07-06 20:08:14 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:08:14.948159 | orchestrator | 2025-07-06 20:08:14 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:08:17.993963 | orchestrator | 2025-07-06 20:08:17 | INFO  | Task 7a66c86e-0e32-456a-82fd-9550b414ecf4 is in state STARTED
2025-07-06 20:08:17.994201 | orchestrator | 2025-07-06 20:08:17 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:08:17.994468 | orchestrator | 2025-07-06 20:08:17 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:08:21.035059 | orchestrator | 2025-07-06 20:08:21 | INFO  | Task 7a66c86e-0e32-456a-82fd-9550b414ecf4 is in state STARTED
2025-07-06 20:08:21.036326 | orchestrator | 2025-07-06 20:08:21 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:08:21.036515 | orchestrator | 2025-07-06 20:08:21 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:08:24.083057 | orchestrator | 2025-07-06 20:08:24 | INFO  | Task 7a66c86e-0e32-456a-82fd-9550b414ecf4 is in state STARTED
2025-07-06 20:08:24.085489 | orchestrator | 2025-07-06 20:08:24 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:08:24.085751 | orchestrator | 2025-07-06 20:08:24 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:08:27.129172 | orchestrator | 2025-07-06 20:08:27 | INFO  | Task 7a66c86e-0e32-456a-82fd-9550b414ecf4 is in state STARTED
2025-07-06 20:08:27.131630 | orchestrator | 2025-07-06 20:08:27 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:08:27.131665 | orchestrator | 2025-07-06 20:08:27 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:08:30.185168 | orchestrator | 2025-07-06 20:08:30 | INFO  | Task 7a66c86e-0e32-456a-82fd-9550b414ecf4 is in state STARTED
2025-07-06 20:08:30.187936 | orchestrator | 2025-07-06 20:08:30 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06
20:08:30.189456 | orchestrator | 2025-07-06 20:08:30 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:08:33.232011 | orchestrator | 2025-07-06 20:08:33 | INFO  | Task 7a66c86e-0e32-456a-82fd-9550b414ecf4 is in state STARTED
2025-07-06 20:08:33.234434 | orchestrator | 2025-07-06 20:08:33 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:08:33.234499 | orchestrator | 2025-07-06 20:08:33 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:08:36.265532 | orchestrator | 2025-07-06 20:08:36 | INFO  | Task 7a66c86e-0e32-456a-82fd-9550b414ecf4 is in state STARTED
2025-07-06 20:08:36.265678 | orchestrator | 2025-07-06 20:08:36 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:08:36.265693 | orchestrator | 2025-07-06 20:08:36 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:08:39.317681 | orchestrator | 2025-07-06 20:08:39 | INFO  | Task 7a66c86e-0e32-456a-82fd-9550b414ecf4 is in state STARTED
2025-07-06 20:08:39.319705 | orchestrator | 2025-07-06 20:08:39 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:08:39.319796 | orchestrator | 2025-07-06 20:08:39 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:08:42.362895 | orchestrator | 2025-07-06 20:08:42 | INFO  | Task 7a66c86e-0e32-456a-82fd-9550b414ecf4 is in state STARTED
2025-07-06 20:08:42.364927 | orchestrator | 2025-07-06 20:08:42 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:08:42.364998 | orchestrator | 2025-07-06 20:08:42 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:08:45.406329 | orchestrator | 2025-07-06 20:08:45 | INFO  | Task 7a66c86e-0e32-456a-82fd-9550b414ecf4 is in state STARTED
2025-07-06 20:08:45.406947 | orchestrator | 2025-07-06 20:08:45 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:08:45.407006 | orchestrator | 2025-07-06 20:08:45 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:08:48.441830 | orchestrator | 2025-07-06 20:08:48 | INFO  | Task e3a1ed4e-1d98-484c-802b-09edab268f81 is in state STARTED
2025-07-06 20:08:48.442102 | orchestrator | 2025-07-06 20:08:48 | INFO  | Task c9f134db-36eb-4823-9df6-61c9f4143b26 is in state STARTED
2025-07-06 20:08:48.444883 | orchestrator | 2025-07-06 20:08:48 | INFO  | Task 7a66c86e-0e32-456a-82fd-9550b414ecf4 is in state SUCCESS
2025-07-06 20:08:48.448667 | orchestrator |
2025-07-06 20:08:48.448771 | orchestrator |
2025-07-06 20:08:48.448787 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2025-07-06 20:08:48.448800 | orchestrator |
2025-07-06 20:08:48.448811 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2025-07-06 20:08:48.448823 | orchestrator | Sunday 06 July 2025 20:06:35 +0000 (0:00:00.289) 0:00:00.289 ***********
2025-07-06 20:08:48.448834 | orchestrator | ok: [testbed-manager]
2025-07-06 20:08:48.448846 | orchestrator |
2025-07-06 20:08:48.448858 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2025-07-06 20:08:48.448869 | orchestrator | Sunday 06 July 2025 20:06:35 +0000 (0:00:00.708) 0:00:00.998 ***********
2025-07-06 20:08:48.448880 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2025-07-06 20:08:48.448891 | orchestrator |
2025-07-06 20:08:48.448902 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2025-07-06 20:08:48.448913 | orchestrator | Sunday 06 July 2025 20:06:36 +0000 (0:00:00.624) 0:00:01.623 ***********
2025-07-06 20:08:48.448924 | orchestrator | changed: [testbed-manager]
2025-07-06 20:08:48.448935 | orchestrator |
2025-07-06 20:08:48.448946 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2025-07-06 20:08:48.448956 | orchestrator | Sunday 06 July 2025 20:06:37 +0000 (0:00:01.128) 0:00:02.752 ***********
2025-07-06 20:08:48.448967 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2025-07-06 20:08:48.448998 | orchestrator | ok: [testbed-manager]
2025-07-06 20:08:48.449010 | orchestrator |
2025-07-06 20:08:48.449021 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-07-06 20:08:48.449032 | orchestrator | Sunday 06 July 2025 20:07:42 +0000 (0:01:05.421) 0:01:08.174 ***********
2025-07-06 20:08:48.449043 | orchestrator | changed: [testbed-manager]
2025-07-06 20:08:48.449054 | orchestrator |
2025-07-06 20:08:48.449065 | orchestrator | PLAY RECAP *********************************************************************
2025-07-06 20:08:48.449077 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 20:08:48.449089 | orchestrator |
2025-07-06 20:08:48.449100 | orchestrator |
2025-07-06 20:08:48.449111 | orchestrator | TASKS RECAP ********************************************************************
2025-07-06 20:08:48.449122 | orchestrator | Sunday 06 July 2025 20:07:56 +0000 (0:00:14.009) 0:01:22.183 ***********
2025-07-06 20:08:48.449133 | orchestrator | ===============================================================================
2025-07-06 20:08:48.449143 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 65.42s
2025-07-06 20:08:48.449154 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ----------------- 14.01s
2025-07-06 20:08:48.449165 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.13s
2025-07-06 20:08:48.449176 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.71s
2025-07-06 20:08:48.449186 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.63s
2025-07-06 20:08:48.449197 | orchestrator |
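The log above shows the deploy driver polling each task's state once per second until it leaves STARTED, while the `Manage phpmyadmin service` task separately retries on failure before reporting ok. A minimal sketch of such a wait loop, assuming a caller-supplied `get_task_state` lookup (a hypothetical helper, not the actual OSISM API):

```python
import time

# States after which a task will never change again (assumption for this sketch).
TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids, get_task_state, interval=1.0, timeout=300.0):
    """Poll each task until every one reaches a terminal state.

    get_task_state is a caller-supplied function mapping a task id to a
    state string such as "STARTED" or "SUCCESS" (hypothetical helper).
    Returns the final state of every task, keyed by task id.
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    states = {}
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            states[task_id] = state
            print(f"Task {task_id} is in state {state}")
        # Drop tasks that have finished; sleep only if work remains.
        pending = {t for t in pending if states[t] not in TERMINAL_STATES}
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return states
```

The one-second interval and the "Wait 1 second(s) until the next check" wording mirror the log; the timeout guard is an addition so the loop cannot spin forever if a task never leaves STARTED.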
2025-07-06 20:08:48.449208 | orchestrator |
2025-07-06 20:08:48.449219 | orchestrator | PLAY [Apply role common] *******************************************************
2025-07-06 20:08:48.449230 | orchestrator |
2025-07-06 20:08:48.449241 | orchestrator | TASK [common : include_tasks] **************************************************
2025-07-06 20:08:48.449272 | orchestrator | Sunday 06 July 2025 20:06:06 +0000 (0:00:00.243) 0:00:00.243 ***********
2025-07-06 20:08:48.449283 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-06 20:08:48.449296 | orchestrator |
2025-07-06 20:08:48.449307 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-07-06 20:08:48.449317 | orchestrator | Sunday 06 July 2025 20:06:07 +0000 (0:00:01.149) 0:00:01.393 ***********
2025-07-06 20:08:48.449328 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2025-07-06 20:08:48.449339 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2025-07-06 20:08:48.449349 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2025-07-06 20:08:48.449360 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2025-07-06 20:08:48.449371 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-07-06 20:08:48.449381 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-07-06 20:08:48.449392 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2025-07-06 20:08:48.449404 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-07-06 20:08:48.449415 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-07-06 20:08:48.449437 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2025-07-06 20:08:48.449448 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-07-06 20:08:48.449460 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-07-06 20:08:48.449471 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2025-07-06 20:08:48.449482 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-07-06 20:08:48.449492 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-07-06 20:08:48.449503 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-07-06 20:08:48.449592 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-07-06 20:08:48.449608 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-07-06 20:08:48.449619 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-07-06 20:08:48.449630 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-07-06 20:08:48.449641 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-07-06 20:08:48.449652 | orchestrator |
2025-07-06 20:08:48.449662 | orchestrator | TASK [common : include_tasks] **************************************************
2025-07-06 20:08:48.449673 | orchestrator | Sunday 06 July 2025 20:06:11 +0000 (0:00:04.203) 0:00:05.596 ***********
2025-07-06 20:08:48.449684 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-06 20:08:48.449696 | orchestrator |
2025-07-06 20:08:48.449707 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2025-07-06 20:08:48.449718 | orchestrator | Sunday 06 July 2025 20:06:12 +0000 (0:00:01.072) 0:00:06.668 ***********
2025-07-06 20:08:48.449733 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-06 20:08:48.449759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-06 20:08:48.449771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-06 20:08:48.449782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-06 20:08:48.449794 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-06 20:08:48.449836 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:08:48.449850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:08:48.449862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:08:48.449880 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:08:48.449891 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-06 20:08:48.449910 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-06 20:08:48.449923 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:08:48.449952 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:08:48.449995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:08:48.450009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:08:48.450094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:08:48.450120 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:08:48.450132 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:08:48.450143 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:08:48.450155 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:08:48.450171 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:08:48.450182 | orchestrator |
2025-07-06 20:08:48.450193 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2025-07-06 20:08:48.450204 | orchestrator | Sunday 06 July 2025 20:06:17 +0000 (0:00:05.099) 0:00:11.768 ***********
2025-07-06 20:08:48.450250 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-06 20:08:48.450264 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:08:48.450283 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:08:48.450295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-06 20:08:48.450307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:08:48.450318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:08:48.450329 | orchestrator | skipping: [testbed-manager]
2025-07-06 20:08:48.450340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-06 20:08:48.450356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:08:48.450377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:08:48.450396 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:08:48.450407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-06 20:08:48.450419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:08:48.450430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:08:48.450441 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:08:48.450453 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-06 20:08:48.450464 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:08:48.450480 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:08:48.450491 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:08:48.450502 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:08:48.450514 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-06 20:08:48.450567 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:08:48.450581 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:08:48.450592 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:08:48.450603 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-06 20:08:48.450614 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:08:48.450625 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:08:48.450637 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:08:48.450647 | orchestrator |
2025-07-06 20:08:48.450658 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2025-07-06 20:08:48.450669 | orchestrator | Sunday 06 July 2025 20:06:19 +0000 (0:00:01.551) 0:00:13.319 ***********
2025-07-06 20:08:48.450680 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-06 20:08:48.450696 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:08:48.450722 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:08:48.450733 | orchestrator | skipping: [testbed-manager]
2025-07-06 20:08:48.450745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-06 20:08:48.450756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:08:48.450767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:08:48.450778 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:08:48.450789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-06 20:08:48.450801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:08:48.450817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:08:48.450840 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:08:48.450851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 
'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-06 20:08:48.450869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:08:48.450881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:08:48.450892 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:08:48.450903 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-06 20:08:48.450914 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:08:48.450926 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:08:48.450937 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:08:48.450948 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-06 20:08:48.450963 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 
'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:08:48.450988 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:08:48.450999 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:08:48.451010 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-07-06 20:08:48.451022 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:08:48.451033 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:08:48.451044 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:08:48.451055 | orchestrator |
2025-07-06 20:08:48.451066 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2025-07-06 20:08:48.451077 | orchestrator | Sunday 06 July 2025 20:06:22 +0000 (0:00:02.841) 0:00:16.161 ***********
2025-07-06 20:08:48.451088 | orchestrator | skipping: [testbed-manager]
2025-07-06 20:08:48.451099 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:08:48.451110 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:08:48.451120 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:08:48.451131 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:08:48.451142 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:08:48.451152 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:08:48.451163 | orchestrator |
2025-07-06 20:08:48.451174 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2025-07-06 20:08:48.451185 | orchestrator | Sunday 06 July 2025 20:06:22 +0000 (0:00:00.777) 0:00:16.939 ***********
2025-07-06 20:08:48.451196 | orchestrator | skipping: [testbed-manager]
2025-07-06 20:08:48.451206 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:08:48.451217 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:08:48.451227 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:08:48.451245 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:08:48.451255 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:08:48.451266 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:08:48.451277 | orchestrator |
2025-07-06 20:08:48.451288 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2025-07-06 20:08:48.451298 | orchestrator | Sunday 06 July 2025 20:06:24 +0000 (0:00:01.248) 0:00:18.187 ***********
2025-07-06 20:08:48.451309 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-06 20:08:48.451321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-06 20:08:48.451342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image':
'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-06 20:08:48.451354 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:08:48.451370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-06 20:08:48.451382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:08:48.451393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:08:48.451410 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-06 20:08:48.451426 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:08:48.451438 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 
'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-06 20:08:48.451455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:08:48.451467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:08:48.451478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 
20:08:48.451490 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-06 20:08:48.451507 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:08:48.451518 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:08:48.451533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 
'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:08:48.451633 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:08:48.451645 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:08:48.451657 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:08:48.451668 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:08:48.451679 | orchestrator |
2025-07-06 20:08:48.451690 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2025-07-06 20:08:48.451701 | orchestrator | Sunday 06 July 2025 20:06:31 +0000 (0:00:07.518) 0:00:25.706 ***********
2025-07-06 20:08:48.451716 | orchestrator | [WARNING]: Skipped
2025-07-06 20:08:48.451734 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2025-07-06 20:08:48.451762 | orchestrator | to this access issue:
2025-07-06 20:08:48.451780 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2025-07-06 20:08:48.451799 | orchestrator | directory
2025-07-06 20:08:48.451818 | orchestrator | ok: [testbed-manager -> localhost]
2025-07-06 20:08:48.451836 | orchestrator |
2025-07-06 20:08:48.451851 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2025-07-06 20:08:48.451863 | orchestrator | Sunday 06 July 2025 20:06:33 +0000 (0:00:01.688) 0:00:27.394 ***********
2025-07-06 20:08:48.451873 | orchestrator | [WARNING]: Skipped
2025-07-06 20:08:48.451884 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2025-07-06 20:08:48.451895 | orchestrator | to this access issue:
2025-07-06 20:08:48.451905 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2025-07-06 20:08:48.451916 | orchestrator | directory
2025-07-06 20:08:48.451927 | orchestrator | ok: [testbed-manager -> localhost]
2025-07-06 20:08:48.451937 | orchestrator |
2025-07-06 20:08:48.451948 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2025-07-06 20:08:48.451959 | orchestrator | Sunday 06 July 2025 20:06:34 +0000 (0:00:00.941) 0:00:28.336 ***********
2025-07-06 20:08:48.451970 | orchestrator | [WARNING]: Skipped
2025-07-06 20:08:48.451980 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2025-07-06 20:08:48.451991 | orchestrator | to this access issue:
2025-07-06 20:08:48.452002 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2025-07-06 20:08:48.452012 | orchestrator | directory
2025-07-06 20:08:48.452023 | orchestrator | ok: [testbed-manager -> localhost]
2025-07-06 20:08:48.452034 | orchestrator |
2025-07-06 20:08:48.452044 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2025-07-06 20:08:48.452055 | orchestrator | Sunday 06 July 2025 20:06:35 +0000 (0:00:00.762) 0:00:29.099 ***********
2025-07-06 20:08:48.452065 | orchestrator | [WARNING]: Skipped
2025-07-06 20:08:48.452076 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2025-07-06 20:08:48.452087 | orchestrator | to this access issue:
2025-07-06 20:08:48.452097 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2025-07-06 20:08:48.452108 | orchestrator | directory
2025-07-06 20:08:48.452119 | orchestrator | ok: [testbed-manager -> localhost]
2025-07-06 20:08:48.452129 | orchestrator |
2025-07-06 20:08:48.452140 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2025-07-06 20:08:48.452150 | orchestrator | Sunday 06 July 2025 20:06:35 +0000 (0:00:00.727) 0:00:29.827 ***********
2025-07-06 20:08:48.452161 | orchestrator | changed: [testbed-manager]
2025-07-06 20:08:48.452172 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:08:48.452187 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:08:48.452198 | orchestrator | changed: [testbed-node-3]
2025-07-06 20:08:48.452209 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:08:48.452220 | orchestrator | changed: [testbed-node-4]
2025-07-06 20:08:48.452230 | orchestrator | changed: [testbed-node-5]
2025-07-06 20:08:48.452241 | orchestrator |
2025-07-06 20:08:48.452251 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2025-07-06 20:08:48.452262 | orchestrator | Sunday 06 July 2025 20:06:39 +0000 (0:00:04.076) 0:00:33.904 ***********
2025-07-06 20:08:48.452273 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-07-06 20:08:48.452284 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-07-06 20:08:48.452295 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-07-06 20:08:48.452312 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-07-06 20:08:48.452323 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-07-06 20:08:48.452340 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-07-06 20:08:48.452351 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-07-06 20:08:48.452361 | orchestrator |
2025-07-06 20:08:48.452372 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2025-07-06 20:08:48.452383 | orchestrator | Sunday 06 July 2025 20:06:42 +0000 (0:00:03.315) 0:00:36.539 ***********
2025-07-06 20:08:48.452394 | orchestrator | changed: [testbed-manager]
2025-07-06 20:08:48.452405 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:08:48.452415 |
orchestrator | changed: [testbed-node-1]
2025-07-06 20:08:48.452426 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:08:48.452437 | orchestrator | changed: [testbed-node-3]
2025-07-06 20:08:48.452447 | orchestrator | changed: [testbed-node-4]
2025-07-06 20:08:48.452457 | orchestrator | changed: [testbed-node-5]
2025-07-06 20:08:48.452468 | orchestrator |
2025-07-06 20:08:48.452479 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2025-07-06 20:08:48.452489 | orchestrator | Sunday 06 July 2025 20:06:45 +0000 (0:00:03.315) 0:00:39.855 ***********
2025-07-06 20:08:48.452501 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-06 20:08:48.452513 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:08:48.452524 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image':
'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:08:48.452583 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-06 20:08:48.452600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:08:48.452627 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:08:48.452639 | orchestrator | ok: [testbed-node-1] => 
(item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-06 20:08:48.452651 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-06 20:08:48.452662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:08:48.452674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:08:48.452685 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:08:48.452696 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-06 20:08:48.452712 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 
20:08:48.452740 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:08:48.452752 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-06 20:08:48.452763 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:08:48.452774 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:08:48.452786 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:08:48.452797 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-06 20:08:48.452808 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:08:48.452830 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 
'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:08:48.452842 | orchestrator | 2025-07-06 20:08:48.452853 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-07-06 20:08:48.452864 | orchestrator | Sunday 06 July 2025 20:06:49 +0000 (0:00:03.294) 0:00:43.149 *********** 2025-07-06 20:08:48.452875 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-07-06 20:08:48.452886 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-07-06 20:08:48.452897 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-07-06 20:08:48.452917 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-07-06 20:08:48.452928 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-07-06 20:08:48.452939 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-07-06 20:08:48.452950 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-07-06 20:08:48.452961 | orchestrator | 2025-07-06 20:08:48.452972 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-07-06 20:08:48.452982 | orchestrator | Sunday 06 July 2025 20:06:52 +0000 (0:00:03.638) 0:00:46.788 *********** 2025-07-06 20:08:48.452993 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-07-06 20:08:48.453004 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-07-06 20:08:48.453015 | orchestrator | changed: 
[testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-07-06 20:08:48.453026 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-07-06 20:08:48.453036 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-07-06 20:08:48.453047 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-07-06 20:08:48.453058 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-07-06 20:08:48.453069 | orchestrator | 2025-07-06 20:08:48.453079 | orchestrator | TASK [common : Check common containers] **************************************** 2025-07-06 20:08:48.453090 | orchestrator | Sunday 06 July 2025 20:06:55 +0000 (0:00:02.778) 0:00:49.566 *********** 2025-07-06 20:08:48.453101 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-06 20:08:48.453113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-06 20:08:48.453125 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-06 20:08:48.453147 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:08:48.453158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-06 20:08:48.453176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:08:48.453188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:08:48.453199 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-06 20:08:48.453210 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-06 20:08:48.453221 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:08:48.453239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:08:48.453255 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-06 20:08:48.453272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:08:48.453284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:08:48.453295 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:08:48.453306 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 
20:08:48.453317 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:08:48.453336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:08:48.453347 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:08:48.453363 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-07-06 20:08:48.453374 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:08:48.453386 | orchestrator | 2025-07-06 20:08:48.453402 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-07-06 20:08:48.453413 | orchestrator | Sunday 06 July 2025 20:06:58 +0000 (0:00:02.791) 0:00:52.357 *********** 2025-07-06 20:08:48.453424 | orchestrator | changed: [testbed-manager] 2025-07-06 20:08:48.453435 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:08:48.453446 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:08:48.453456 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:08:48.453467 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:08:48.453478 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:08:48.453488 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:08:48.453499 | orchestrator | 2025-07-06 20:08:48.453510 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-07-06 20:08:48.453520 | orchestrator | Sunday 06 July 2025 20:06:59 +0000 (0:00:01.371) 0:00:53.728 *********** 2025-07-06 20:08:48.453531 | orchestrator | changed: [testbed-manager] 2025-07-06 20:08:48.453709 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:08:48.453729 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:08:48.453740 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:08:48.453750 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:08:48.453761 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:08:48.453771 | orchestrator | changed: 
[testbed-node-5] 2025-07-06 20:08:48.453782 | orchestrator | 2025-07-06 20:08:48.453793 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-07-06 20:08:48.453804 | orchestrator | Sunday 06 July 2025 20:07:01 +0000 (0:00:01.384) 0:00:55.113 *********** 2025-07-06 20:08:48.453815 | orchestrator | 2025-07-06 20:08:48.453826 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-07-06 20:08:48.453837 | orchestrator | Sunday 06 July 2025 20:07:01 +0000 (0:00:00.310) 0:00:55.424 *********** 2025-07-06 20:08:48.453847 | orchestrator | 2025-07-06 20:08:48.453871 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-07-06 20:08:48.453882 | orchestrator | Sunday 06 July 2025 20:07:01 +0000 (0:00:00.080) 0:00:55.504 *********** 2025-07-06 20:08:48.453893 | orchestrator | 2025-07-06 20:08:48.453904 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-07-06 20:08:48.453914 | orchestrator | Sunday 06 July 2025 20:07:01 +0000 (0:00:00.075) 0:00:55.580 *********** 2025-07-06 20:08:48.453925 | orchestrator | 2025-07-06 20:08:48.453935 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-07-06 20:08:48.453946 | orchestrator | Sunday 06 July 2025 20:07:01 +0000 (0:00:00.085) 0:00:55.665 *********** 2025-07-06 20:08:48.453957 | orchestrator | 2025-07-06 20:08:48.453967 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-07-06 20:08:48.453977 | orchestrator | Sunday 06 July 2025 20:07:01 +0000 (0:00:00.103) 0:00:55.769 *********** 2025-07-06 20:08:48.453984 | orchestrator | 2025-07-06 20:08:48.453992 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-07-06 20:08:48.454000 | orchestrator | Sunday 06 July 2025 20:07:01 +0000 (0:00:00.092) 
0:00:55.862 *********** 2025-07-06 20:08:48.454008 | orchestrator | 2025-07-06 20:08:48.454016 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-07-06 20:08:48.454054 | orchestrator | Sunday 06 July 2025 20:07:01 +0000 (0:00:00.113) 0:00:55.975 *********** 2025-07-06 20:08:48.454062 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:08:48.454070 | orchestrator | changed: [testbed-manager] 2025-07-06 20:08:48.454078 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:08:48.454086 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:08:48.454094 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:08:48.454101 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:08:48.454109 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:08:48.454117 | orchestrator | 2025-07-06 20:08:48.454125 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-07-06 20:08:48.454133 | orchestrator | Sunday 06 July 2025 20:07:48 +0000 (0:00:46.153) 0:01:42.129 *********** 2025-07-06 20:08:48.454141 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:08:48.454148 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:08:48.454156 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:08:48.454164 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:08:48.454172 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:08:48.454179 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:08:48.454187 | orchestrator | changed: [testbed-manager] 2025-07-06 20:08:48.454195 | orchestrator | 2025-07-06 20:08:48.454203 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-07-06 20:08:48.454211 | orchestrator | Sunday 06 July 2025 20:08:33 +0000 (0:00:45.795) 0:02:27.924 *********** 2025-07-06 20:08:48.454219 | orchestrator | ok: [testbed-manager] 2025-07-06 20:08:48.454227 | orchestrator | ok: [testbed-node-1] 
2025-07-06 20:08:48.454235 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:08:48.454242 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:08:48.454250 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:08:48.454258 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:08:48.454266 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:08:48.454273 | orchestrator | 2025-07-06 20:08:48.454281 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-07-06 20:08:48.454289 | orchestrator | Sunday 06 July 2025 20:08:35 +0000 (0:00:02.048) 0:02:29.973 *********** 2025-07-06 20:08:48.454297 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:08:48.454305 | orchestrator | changed: [testbed-manager] 2025-07-06 20:08:48.454312 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:08:48.454325 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:08:48.454333 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:08:48.454341 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:08:48.454348 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:08:48.454356 | orchestrator | 2025-07-06 20:08:48.454364 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:08:48.454379 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-07-06 20:08:48.454388 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-07-06 20:08:48.454406 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-07-06 20:08:48.454414 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-07-06 20:08:48.454422 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-07-06 20:08:48.454430 | orchestrator | 
testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-07-06 20:08:48.454438 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-07-06 20:08:48.454446 | orchestrator | 2025-07-06 20:08:48.454453 | orchestrator | 2025-07-06 20:08:48.454461 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:08:48.454469 | orchestrator | Sunday 06 July 2025 20:08:45 +0000 (0:00:09.357) 0:02:39.331 *********** 2025-07-06 20:08:48.454477 | orchestrator | =============================================================================== 2025-07-06 20:08:48.454485 | orchestrator | common : Restart fluentd container ------------------------------------- 46.15s 2025-07-06 20:08:48.454493 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 45.80s 2025-07-06 20:08:48.454501 | orchestrator | common : Restart cron container ----------------------------------------- 9.36s 2025-07-06 20:08:48.454509 | orchestrator | common : Copying over config.json files for services -------------------- 7.52s 2025-07-06 20:08:48.454516 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.10s 2025-07-06 20:08:48.454524 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.20s 2025-07-06 20:08:48.454532 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.08s 2025-07-06 20:08:48.454557 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.64s 2025-07-06 20:08:48.454565 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.32s 2025-07-06 20:08:48.454573 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.29s 2025-07-06 20:08:48.454581 | orchestrator | service-cert-copy : common | Copying 
over backend internal TLS key ------ 2.84s 2025-07-06 20:08:48.454588 | orchestrator | common : Check common containers ---------------------------------------- 2.79s 2025-07-06 20:08:48.454596 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.78s 2025-07-06 20:08:48.454604 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.64s 2025-07-06 20:08:48.454612 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.05s 2025-07-06 20:08:48.454620 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.69s 2025-07-06 20:08:48.454628 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.55s 2025-07-06 20:08:48.454636 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.38s 2025-07-06 20:08:48.454643 | orchestrator | common : Creating log volume -------------------------------------------- 1.37s 2025-07-06 20:08:48.454651 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.25s 2025-07-06 20:08:48.454659 | orchestrator | 2025-07-06 20:08:48 | INFO  | Task 5a4647f5-df6e-49f5-9e44-03e773560c63 is in state STARTED 2025-07-06 20:08:48.454672 | orchestrator | 2025-07-06 20:08:48 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:08:48.454680 | orchestrator | 2025-07-06 20:08:48 | INFO  | Task 0934b857-c2e1-4250-9a1b-b0a86c149407 is in state STARTED 2025-07-06 20:08:48.454688 | orchestrator | 2025-07-06 20:08:48 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:08:51.487152 | orchestrator | 2025-07-06 20:08:51 | INFO  | Task e3a1ed4e-1d98-484c-802b-09edab268f81 is in state STARTED 2025-07-06 20:08:51.487729 | orchestrator | 2025-07-06 20:08:51 | INFO  | Task c9f134db-36eb-4823-9df6-61c9f4143b26 is in state STARTED 2025-07-06 20:08:51.488662 | orchestrator | 
2025-07-06 20:08:51 | INFO  | Task 5a4647f5-df6e-49f5-9e44-03e773560c63 is in state STARTED 2025-07-06 20:08:51.489602 | orchestrator | 2025-07-06 20:08:51 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:08:51.490515 | orchestrator | 2025-07-06 20:08:51 | INFO  | Task 0934b857-c2e1-4250-9a1b-b0a86c149407 is in state STARTED 2025-07-06 20:08:51.490576 | orchestrator | 2025-07-06 20:08:51 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:08:54.515182 | orchestrator | 2025-07-06 20:08:54 | INFO  | Task e3a1ed4e-1d98-484c-802b-09edab268f81 is in state STARTED 2025-07-06 20:08:54.515474 | orchestrator | 2025-07-06 20:08:54 | INFO  | Task c9f134db-36eb-4823-9df6-61c9f4143b26 is in state STARTED 2025-07-06 20:08:54.516089 | orchestrator | 2025-07-06 20:08:54 | INFO  | Task 5a4647f5-df6e-49f5-9e44-03e773560c63 is in state STARTED 2025-07-06 20:08:54.516820 | orchestrator | 2025-07-06 20:08:54 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:08:54.517446 | orchestrator | 2025-07-06 20:08:54 | INFO  | Task 0934b857-c2e1-4250-9a1b-b0a86c149407 is in state STARTED 2025-07-06 20:08:54.517636 | orchestrator | 2025-07-06 20:08:54 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:08:57.549396 | orchestrator | 2025-07-06 20:08:57 | INFO  | Task e3a1ed4e-1d98-484c-802b-09edab268f81 is in state STARTED 2025-07-06 20:08:57.553780 | orchestrator | 2025-07-06 20:08:57 | INFO  | Task c9f134db-36eb-4823-9df6-61c9f4143b26 is in state STARTED 2025-07-06 20:08:57.554524 | orchestrator | 2025-07-06 20:08:57 | INFO  | Task 5a4647f5-df6e-49f5-9e44-03e773560c63 is in state STARTED 2025-07-06 20:08:57.555444 | orchestrator | 2025-07-06 20:08:57 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:08:57.558396 | orchestrator | 2025-07-06 20:08:57 | INFO  | Task 0934b857-c2e1-4250-9a1b-b0a86c149407 is in state STARTED 2025-07-06 20:08:57.558441 | orchestrator | 
2025-07-06 20:08:57 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:09:00.590015 | orchestrator | 2025-07-06 20:09:00 | INFO  | Task e3a1ed4e-1d98-484c-802b-09edab268f81 is in state STARTED 2025-07-06 20:09:00.590999 | orchestrator | 2025-07-06 20:09:00 | INFO  | Task c9f134db-36eb-4823-9df6-61c9f4143b26 is in state STARTED 2025-07-06 20:09:00.594878 | orchestrator | 2025-07-06 20:09:00 | INFO  | Task 5a4647f5-df6e-49f5-9e44-03e773560c63 is in state STARTED 2025-07-06 20:09:00.595230 | orchestrator | 2025-07-06 20:09:00 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:09:00.596904 | orchestrator | 2025-07-06 20:09:00 | INFO  | Task 0934b857-c2e1-4250-9a1b-b0a86c149407 is in state STARTED 2025-07-06 20:09:00.596928 | orchestrator | 2025-07-06 20:09:00 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:09:03.630597 | orchestrator | 2025-07-06 20:09:03 | INFO  | Task e3a1ed4e-1d98-484c-802b-09edab268f81 is in state STARTED 2025-07-06 20:09:03.632822 | orchestrator | 2025-07-06 20:09:03 | INFO  | Task c9f134db-36eb-4823-9df6-61c9f4143b26 is in state STARTED 2025-07-06 20:09:03.635694 | orchestrator | 2025-07-06 20:09:03 | INFO  | Task 5a4647f5-df6e-49f5-9e44-03e773560c63 is in state STARTED 2025-07-06 20:09:03.637297 | orchestrator | 2025-07-06 20:09:03 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:09:03.639166 | orchestrator | 2025-07-06 20:09:03 | INFO  | Task 0934b857-c2e1-4250-9a1b-b0a86c149407 is in state STARTED 2025-07-06 20:09:03.639254 | orchestrator | 2025-07-06 20:09:03 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:09:06.676898 | orchestrator | 2025-07-06 20:09:06 | INFO  | Task e3a1ed4e-1d98-484c-802b-09edab268f81 is in state STARTED 2025-07-06 20:09:06.677647 | orchestrator | 2025-07-06 20:09:06 | INFO  | Task c9f134db-36eb-4823-9df6-61c9f4143b26 is in state STARTED 2025-07-06 20:09:06.678607 | orchestrator | 2025-07-06 20:09:06 | INFO  | 
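The interleaved INFO lines come from the OSISM client polling each submitted kolla-ansible task until it leaves the STARTED state, sleeping one second between checks. A minimal Python sketch of that poll-until-terminal pattern — the `get_task_state` callable and the exact set of terminal states are assumptions for illustration, not the real OSISM client API:

```python
import time

# Assumed terminal states; the log shows STARTED -> SUCCESS transitions.
TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    """Poll until every task reaches a terminal state.

    get_task_state is a hypothetical callable (task_id -> state string);
    the real OSISM client looks task state up via its API.
    """
    pending = set(task_ids)
    results = {}
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                results[task_id] = state
        pending -= results.keys()
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return results
```

With several tasks in flight, the loop prints one status line per task per round, exactly the shape of the repeated blocks in this log.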
Task 5a4647f5-df6e-49f5-9e44-03e773560c63 is in state STARTED 2025-07-06 20:09:06.679194 | orchestrator | 2025-07-06 20:09:06 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:09:06.680123 | orchestrator | 2025-07-06 20:09:06 | INFO  | Task 0934b857-c2e1-4250-9a1b-b0a86c149407 is in state STARTED 2025-07-06 20:09:06.680154 | orchestrator | 2025-07-06 20:09:06 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:09:09.715324 | orchestrator | 2025-07-06 20:09:09 | INFO  | Task e3a1ed4e-1d98-484c-802b-09edab268f81 is in state SUCCESS 2025-07-06 20:09:09.715439 | orchestrator | 2025-07-06 20:09:09 | INFO  | Task c9f134db-36eb-4823-9df6-61c9f4143b26 is in state STARTED 2025-07-06 20:09:09.715906 | orchestrator | 2025-07-06 20:09:09 | INFO  | Task 8f3c10b3-4942-4022-b704-474f881ff5a2 is in state STARTED 2025-07-06 20:09:09.716418 | orchestrator | 2025-07-06 20:09:09 | INFO  | Task 5a4647f5-df6e-49f5-9e44-03e773560c63 is in state STARTED 2025-07-06 20:09:09.717132 | orchestrator | 2025-07-06 20:09:09 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:09:09.717898 | orchestrator | 2025-07-06 20:09:09 | INFO  | Task 0934b857-c2e1-4250-9a1b-b0a86c149407 is in state STARTED 2025-07-06 20:09:09.717930 | orchestrator | 2025-07-06 20:09:09 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:09:12.751644 | orchestrator | 2025-07-06 20:09:12 | INFO  | Task c9f134db-36eb-4823-9df6-61c9f4143b26 is in state SUCCESS 2025-07-06 20:09:12.752440 | orchestrator | 2025-07-06 20:09:12.752503 | orchestrator | 2025-07-06 20:09:12.752603 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-06 20:09:12.752623 | orchestrator | 2025-07-06 20:09:12.752643 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-06 20:09:12.752662 | orchestrator | Sunday 06 July 2025 20:08:51 +0000 (0:00:00.302) 0:00:00.302 
*********** 2025-07-06 20:09:12.752679 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:09:12.752699 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:09:12.752715 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:09:12.752730 | orchestrator | 2025-07-06 20:09:12.752746 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-06 20:09:12.752764 | orchestrator | Sunday 06 July 2025 20:08:52 +0000 (0:00:00.315) 0:00:00.617 *********** 2025-07-06 20:09:12.752783 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-07-06 20:09:12.752801 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-07-06 20:09:12.752819 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-07-06 20:09:12.752836 | orchestrator | 2025-07-06 20:09:12.752853 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-07-06 20:09:12.752870 | orchestrator | 2025-07-06 20:09:12.752887 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-07-06 20:09:12.752941 | orchestrator | Sunday 06 July 2025 20:08:52 +0000 (0:00:00.587) 0:00:01.205 *********** 2025-07-06 20:09:12.752962 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:09:12.752981 | orchestrator | 2025-07-06 20:09:12.752999 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-07-06 20:09:12.753018 | orchestrator | Sunday 06 July 2025 20:08:53 +0000 (0:00:00.545) 0:00:01.750 *********** 2025-07-06 20:09:12.753036 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-07-06 20:09:12.753052 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-07-06 20:09:12.753075 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-07-06 20:09:12.753095 | orchestrator | 2025-07-06 
20:09:12.753112 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2025-07-06 20:09:12.753130 | orchestrator | Sunday 06 July 2025 20:08:54 +0000 (0:00:00.893) 0:00:02.643 ***********
2025-07-06 20:09:12.753149 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-07-06 20:09:12.753168 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-07-06 20:09:12.753185 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-07-06 20:09:12.753203 | orchestrator |
2025-07-06 20:09:12.753221 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2025-07-06 20:09:12.753240 | orchestrator | Sunday 06 July 2025 20:08:56 +0000 (0:00:02.252) 0:00:04.896 ***********
2025-07-06 20:09:12.753258 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:09:12.753276 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:09:12.753293 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:09:12.753351 | orchestrator |
2025-07-06 20:09:12.753364 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2025-07-06 20:09:12.753375 | orchestrator | Sunday 06 July 2025 20:08:58 +0000 (0:00:01.853) 0:00:06.750 ***********
2025-07-06 20:09:12.753386 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:09:12.753396 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:09:12.753407 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:09:12.753417 | orchestrator |
2025-07-06 20:09:12.753428 | orchestrator | PLAY RECAP *********************************************************************
2025-07-06 20:09:12.753440 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 20:09:12.753453 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 20:09:12.753463 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 20:09:12.753472 | orchestrator |
2025-07-06 20:09:12.753482 | orchestrator |
2025-07-06 20:09:12.753492 | orchestrator | TASKS RECAP ********************************************************************
2025-07-06 20:09:12.753501 | orchestrator | Sunday 06 July 2025 20:09:05 +0000 (0:00:07.399) 0:00:14.149 ***********
2025-07-06 20:09:12.753511 | orchestrator | ===============================================================================
2025-07-06 20:09:12.753562 | orchestrator | memcached : Restart memcached container --------------------------------- 7.40s
2025-07-06 20:09:12.753573 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.26s
2025-07-06 20:09:12.753582 | orchestrator | memcached : Check memcached container ----------------------------------- 1.85s
2025-07-06 20:09:12.753592 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.89s
2025-07-06 20:09:12.753602 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.59s
2025-07-06 20:09:12.753626 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.54s
2025-07-06 20:09:12.753636 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s
2025-07-06 20:09:12.753645 | orchestrator |
2025-07-06 20:09:12.753665 | orchestrator |
2025-07-06 20:09:12.753674 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-06 20:09:12.753684 | orchestrator |
2025-07-06 20:09:12.753693 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-06 20:09:12.753703 | orchestrator | Sunday 06 July 2025 20:08:51 +0000 (0:00:00.439) 0:00:00.439 ***********
2025-07-06 20:09:12.753712 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:09:12.753725 | orchestrator | ok:
[testbed-node-1] 2025-07-06 20:09:12.753735 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:09:12.753746 | orchestrator | 2025-07-06 20:09:12.753758 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-06 20:09:12.753787 | orchestrator | Sunday 06 July 2025 20:08:51 +0000 (0:00:00.338) 0:00:00.778 *********** 2025-07-06 20:09:12.753800 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-07-06 20:09:12.753811 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-07-06 20:09:12.753823 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-07-06 20:09:12.753834 | orchestrator | 2025-07-06 20:09:12.753845 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-07-06 20:09:12.753856 | orchestrator | 2025-07-06 20:09:12.753872 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-07-06 20:09:12.753889 | orchestrator | Sunday 06 July 2025 20:08:51 +0000 (0:00:00.542) 0:00:01.321 *********** 2025-07-06 20:09:12.753907 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:09:12.753924 | orchestrator | 2025-07-06 20:09:12.753942 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-07-06 20:09:12.753961 | orchestrator | Sunday 06 July 2025 20:08:52 +0000 (0:00:00.621) 0:00:01.942 *********** 2025-07-06 20:09:12.753982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-06 20:09:12.753999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-06 20:09:12.754012 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-06 20:09:12.754248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': 
'30'}}}) 2025-07-06 20:09:12.754281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-06 20:09:12.754303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-06 20:09:12.754314 | orchestrator | 2025-07-06 20:09:12.754324 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-07-06 20:09:12.754334 | orchestrator | Sunday 06 July 2025 20:08:53 +0000 (0:00:01.357) 0:00:03.299 *********** 2025-07-06 20:09:12.754345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': 
['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-06 20:09:12.754355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-06 20:09:12.754365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-06 20:09:12.754376 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-06 20:09:12.754396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-06 20:09:12.754413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-06 20:09:12.754423 | orchestrator | 2025-07-06 20:09:12.754437 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-07-06 20:09:12.754454 | orchestrator | Sunday 06 July 2025 20:08:56 +0000 (0:00:02.910) 0:00:06.210 
*********** 2025-07-06 20:09:12.754471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-06 20:09:12.754488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-06 20:09:12.754506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-06 20:09:12.754549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-06 20:09:12.754579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-06 20:09:12.754603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-06 20:09:12.754619 | orchestrator | 2025-07-06 
20:09:12.754638 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-07-06 20:09:12.754648 | orchestrator | Sunday 06 July 2025 20:08:59 +0000 (0:00:02.712) 0:00:08.923 *********** 2025-07-06 20:09:12.754658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-06 20:09:12.754668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-06 20:09:12.754678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-06 
20:09:12.754688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-06 20:09:12.754706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-06 20:09:12.754721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-06 20:09:12.754731 | orchestrator | 2025-07-06 20:09:12.754741 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-07-06 20:09:12.754751 | orchestrator | Sunday 06 July 2025 20:09:01 +0000 (0:00:01.977) 0:00:10.900 *********** 2025-07-06 20:09:12.754767 | orchestrator | 2025-07-06 20:09:12.754783 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-07-06 20:09:12.754807 | orchestrator | Sunday 06 July 2025 20:09:01 +0000 (0:00:00.074) 0:00:10.975 *********** 2025-07-06 20:09:12.754824 | orchestrator | 2025-07-06 20:09:12.754839 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-07-06 20:09:12.754855 | orchestrator | Sunday 06 July 2025 20:09:01 +0000 (0:00:00.094) 0:00:11.069 *********** 2025-07-06 20:09:12.754870 | orchestrator | 2025-07-06 20:09:12.754886 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-07-06 20:09:12.754902 | orchestrator | Sunday 06 July 2025 20:09:01 +0000 (0:00:00.088) 0:00:11.158 *********** 2025-07-06 20:09:12.754917 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:09:12.754934 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:09:12.754949 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:09:12.754965 | orchestrator | 2025-07-06 20:09:12.755137 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-07-06 20:09:12.755158 | orchestrator | Sunday 06 July 2025 20:09:05 +0000 (0:00:04.090) 0:00:15.249 *********** 2025-07-06 20:09:12.755174 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:09:12.755191 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:09:12.755207 | orchestrator | 
changed: [testbed-node-2] 2025-07-06 20:09:12.755225 | orchestrator | 2025-07-06 20:09:12.755241 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:09:12.755257 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:09:12.755273 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:09:12.755289 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:09:12.755322 | orchestrator | 2025-07-06 20:09:12.755340 | orchestrator | 2025-07-06 20:09:12.755356 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:09:12.755369 | orchestrator | Sunday 06 July 2025 20:09:09 +0000 (0:00:03.948) 0:00:19.197 *********** 2025-07-06 20:09:12.755379 | orchestrator | =============================================================================== 2025-07-06 20:09:12.755388 | orchestrator | redis : Restart redis container ----------------------------------------- 4.09s 2025-07-06 20:09:12.755398 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 3.95s 2025-07-06 20:09:12.755408 | orchestrator | redis : Copying over default config.json files -------------------------- 2.91s 2025-07-06 20:09:12.755417 | orchestrator | redis : Copying over redis config files --------------------------------- 2.71s 2025-07-06 20:09:12.755426 | orchestrator | redis : Check redis containers ------------------------------------------ 1.98s 2025-07-06 20:09:12.755436 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.36s 2025-07-06 20:09:12.755445 | orchestrator | redis : include_tasks --------------------------------------------------- 0.62s 2025-07-06 20:09:12.755455 | orchestrator | Group hosts based on enabled services 
----------------------------------- 0.54s 2025-07-06 20:09:12.755465 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s 2025-07-06 20:09:12.755474 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.26s 2025-07-06 20:09:12.755484 | orchestrator | 2025-07-06 20:09:12 | INFO  | Task 8f3c10b3-4942-4022-b704-474f881ff5a2 is in state STARTED 2025-07-06 20:09:12.755494 | orchestrator | 2025-07-06 20:09:12 | INFO  | Task 5a4647f5-df6e-49f5-9e44-03e773560c63 is in state STARTED 2025-07-06 20:09:12.755511 | orchestrator | 2025-07-06 20:09:12 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:09:12.755853 | orchestrator | 2025-07-06 20:09:12 | INFO  | Task 0934b857-c2e1-4250-9a1b-b0a86c149407 is in state STARTED 2025-07-06 20:09:12.759167 | orchestrator | 2025-07-06 20:09:12 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:09:55.388338 | orchestrator | 2025-07-06 20:09:55 | INFO  | Task e275f502-66db-4ab5-b392-e53a2466890f is in state STARTED 2025-07-06 20:09:55.389410 | orchestrator | 2025-07-06 20:09:55 | INFO  | Task 8f3c10b3-4942-4022-b704-474f881ff5a2 is in state STARTED 2025-07-06 20:09:55.391288 | orchestrator | 2025-07-06 20:09:55 | INFO  | Task 5a4647f5-df6e-49f5-9e44-03e773560c63 is in state SUCCESS 2025-07-06 20:09:55.391637 | orchestrator | 2025-07-06 20:09:55.393472 | orchestrator | 2025-07-06 
20:09:55.393577 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-06 20:09:55.393598 | orchestrator | 2025-07-06 20:09:55.393650 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-06 20:09:55.393673 | orchestrator | Sunday 06 July 2025 20:08:51 +0000 (0:00:00.368) 0:00:00.368 *********** 2025-07-06 20:09:55.393689 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:09:55.393701 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:09:55.393712 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:09:55.393723 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:09:55.393733 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:09:55.393744 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:09:55.393754 | orchestrator | 2025-07-06 20:09:55.393765 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-06 20:09:55.393776 | orchestrator | Sunday 06 July 2025 20:08:52 +0000 (0:00:01.131) 0:00:01.500 *********** 2025-07-06 20:09:55.393787 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-07-06 20:09:55.393798 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-07-06 20:09:55.393809 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-07-06 20:09:55.393834 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-07-06 20:09:55.393845 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-07-06 20:09:55.393855 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-07-06 20:09:55.393866 | orchestrator | 2025-07-06 20:09:55.393877 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-07-06 20:09:55.393888 | 
orchestrator | 2025-07-06 20:09:55.393899 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-07-06 20:09:55.393909 | orchestrator | Sunday 06 July 2025 20:08:53 +0000 (0:00:00.860) 0:00:02.361 *********** 2025-07-06 20:09:55.393922 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:09:55.393934 | orchestrator | 2025-07-06 20:09:55.393944 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-07-06 20:09:55.393955 | orchestrator | Sunday 06 July 2025 20:08:54 +0000 (0:00:01.196) 0:00:03.557 *********** 2025-07-06 20:09:55.393966 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-07-06 20:09:55.393977 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-07-06 20:09:55.393988 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-07-06 20:09:55.393998 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-07-06 20:09:55.394009 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-07-06 20:09:55.394088 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-07-06 20:09:55.394102 | orchestrator | 2025-07-06 20:09:55.394115 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-07-06 20:09:55.394128 | orchestrator | Sunday 06 July 2025 20:08:55 +0000 (0:00:01.360) 0:00:04.917 *********** 2025-07-06 20:09:55.394140 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-07-06 20:09:55.394153 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-07-06 20:09:55.394166 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-07-06 20:09:55.394178 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-07-06 20:09:55.394191 | orchestrator | changed: 
[testbed-node-0] => (item=openvswitch) 2025-07-06 20:09:55.394204 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-07-06 20:09:55.394216 | orchestrator | 2025-07-06 20:09:55.394230 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-07-06 20:09:55.394243 | orchestrator | Sunday 06 July 2025 20:08:57 +0000 (0:00:01.634) 0:00:06.551 *********** 2025-07-06 20:09:55.394256 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-07-06 20:09:55.394278 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:09:55.394302 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-07-06 20:09:55.394315 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:09:55.394328 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-07-06 20:09:55.394341 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:09:55.394353 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-07-06 20:09:55.394365 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:09:55.394378 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-07-06 20:09:55.394390 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:09:55.394403 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-07-06 20:09:55.394416 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:09:55.394427 | orchestrator | 2025-07-06 20:09:55.394438 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-07-06 20:09:55.394448 | orchestrator | Sunday 06 July 2025 20:08:58 +0000 (0:00:01.056) 0:00:07.608 *********** 2025-07-06 20:09:55.394459 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:09:55.394470 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:09:55.394499 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:09:55.394510 | orchestrator | skipping: [testbed-node-0] 2025-07-06 
20:09:55.394520 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:09:55.394531 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:09:55.394542 | orchestrator | 2025-07-06 20:09:55.394552 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-07-06 20:09:55.394563 | orchestrator | Sunday 06 July 2025 20:08:59 +0000 (0:00:00.804) 0:00:08.412 *********** 2025-07-06 20:09:55.394599 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-06 20:09:55.394621 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-06 20:09:55.394634 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-06 20:09:55.394645 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-06 20:09:55.394665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-06 20:09:55.394677 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-06 20:09:55.394696 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-06 20:09:55.394714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-06 20:09:55.394726 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-06 20:09:55.394744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-06 20:09:55.394756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-06 20:09:55.394773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-06 20:09:55.394785 | orchestrator | 2025-07-06 20:09:55.394797 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-07-06 20:09:55.394808 | orchestrator | Sunday 06 July 2025 20:09:00 +0000 (0:00:01.655) 0:00:10.068 *********** 2025-07-06 20:09:55.394820 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-06 20:09:55.394837 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-06 20:09:55.394849 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-06 20:09:55.394867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-06 20:09:55.394878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-06 20:09:55.394905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-06 20:09:55.394922 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-06 20:09:55.394934 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-06 20:09:55.394951 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-06 20:09:55.394963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-06 20:09:55.394974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-06 20:09:55.394993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': 
True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-06 20:09:55.395005 | orchestrator | 2025-07-06 20:09:55.395016 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-07-06 20:09:55.395027 | orchestrator | Sunday 06 July 2025 20:09:04 +0000 (0:00:03.303) 0:00:13.371 *********** 2025-07-06 20:09:55.395038 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:09:55.395050 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:09:55.395061 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:09:55.395071 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:09:55.395082 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:09:55.395092 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:09:55.395103 | orchestrator | 2025-07-06 20:09:55.395114 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-07-06 20:09:55.395130 | orchestrator | Sunday 06 July 2025 20:09:05 +0000 (0:00:01.107) 0:00:14.479 *********** 2025-07-06 20:09:55.395142 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-06 20:09:55.395159 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-06 20:09:55.395171 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-06 20:09:55.395182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-06 20:09:55.395201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-06 20:09:55.395228 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-06 20:09:55.395270 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-06 20:09:55.395291 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-06 20:09:55.395310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-06 20:09:55.395332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-06 20:09:55.395362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-06 20:09:55.395385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-06 20:09:55.395405 | orchestrator | 2025-07-06 20:09:55.395417 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-07-06 20:09:55.395428 | orchestrator | Sunday 06 July 2025 20:09:08 +0000 (0:00:03.551) 0:00:18.030 *********** 2025-07-06 20:09:55.395438 | orchestrator | 2025-07-06 20:09:55.395449 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-07-06 20:09:55.395460 | orchestrator | Sunday 06 July 2025 20:09:09 +0000 (0:00:00.240) 0:00:18.271 *********** 2025-07-06 20:09:55.395471 | orchestrator | 2025-07-06 20:09:55.395523 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-07-06 20:09:55.395535 | orchestrator | Sunday 06 July 2025 20:09:09 +0000 (0:00:00.166) 0:00:18.438 *********** 2025-07-06 20:09:55.395546 | orchestrator | 2025-07-06 20:09:55.395557 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-07-06 20:09:55.395568 | orchestrator | Sunday 06 July 2025 20:09:09 +0000 (0:00:00.194) 0:00:18.632 *********** 2025-07-06 20:09:55.395579 | orchestrator | 2025-07-06 20:09:55.395590 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-07-06 20:09:55.395601 | orchestrator | Sunday 06 July 2025 20:09:09 +0000 (0:00:00.141) 0:00:18.774 *********** 2025-07-06 20:09:55.395612 | orchestrator | 2025-07-06 20:09:55.395622 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-07-06 20:09:55.395633 | orchestrator | Sunday 06 July 2025 
20:09:09 +0000 (0:00:00.180) 0:00:18.954 *********** 2025-07-06 20:09:55.395644 | orchestrator | 2025-07-06 20:09:55.395655 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-07-06 20:09:55.395666 | orchestrator | Sunday 06 July 2025 20:09:10 +0000 (0:00:00.280) 0:00:19.234 *********** 2025-07-06 20:09:55.395677 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:09:55.395688 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:09:55.395699 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:09:55.395709 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:09:55.395720 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:09:55.395731 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:09:55.395742 | orchestrator | 2025-07-06 20:09:55.395753 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-07-06 20:09:55.395764 | orchestrator | Sunday 06 July 2025 20:09:20 +0000 (0:00:10.798) 0:00:30.032 *********** 2025-07-06 20:09:55.395774 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:09:55.395786 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:09:55.395796 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:09:55.395807 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:09:55.395818 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:09:55.395829 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:09:55.395839 | orchestrator | 2025-07-06 20:09:55.395850 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-07-06 20:09:55.395861 | orchestrator | Sunday 06 July 2025 20:09:22 +0000 (0:00:01.561) 0:00:31.594 *********** 2025-07-06 20:09:55.395872 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:09:55.395883 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:09:55.395893 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:09:55.395904 | orchestrator | changed: 
[testbed-node-2] 2025-07-06 20:09:55.395915 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:09:55.395925 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:09:55.395936 | orchestrator | 2025-07-06 20:09:55.395947 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-07-06 20:09:55.395958 | orchestrator | Sunday 06 July 2025 20:09:31 +0000 (0:00:09.281) 0:00:40.875 *********** 2025-07-06 20:09:55.395978 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-07-06 20:09:55.395990 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-07-06 20:09:55.396001 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-07-06 20:09:55.396012 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-07-06 20:09:55.396023 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-07-06 20:09:55.396040 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-07-06 20:09:55.396051 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-07-06 20:09:55.396062 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-07-06 20:09:55.396073 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-07-06 20:09:55.396084 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-07-06 20:09:55.396095 | 
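The "Set system-id, hostname and hw-offload" task above loops over items of the form `{'col': …, 'name': …, 'value': …}` (with an optional `'state': 'absent'`). As a rough sketch only, and not kolla-ansible's actual module code, each item maps onto an `ovs-vsctl set` or `ovs-vsctl remove` invocation against the `Open_vSwitch` table roughly like this (`ovs_vsctl_args` is a hypothetical helper name):

```python
def ovs_vsctl_args(item, table="Open_vSwitch", record="."):
    """Build an ovs-vsctl argument list for one loop item.

    Sketch only: mirrors the item shape seen in the log, not the
    real kolla-ansible implementation.
    """
    col, name, value = item["col"], item["name"], item["value"]
    if item.get("state") == "absent":
        # 'remove' drops a single key from a map-type column,
        # e.g. other_config:hw-offload.
        return ["ovs-vsctl", "remove", table, record, col, name]
    # 'set' writes a key into a map-type column,
    # e.g. external_ids:system-id=testbed-node-0.
    return ["ovs-vsctl", "set", table, record, f"{col}:{name}={value}"]
```

For the `system-id` item on `testbed-node-0`, for example, this yields `ovs-vsctl set Open_vSwitch . external_ids:system-id=testbed-node-0`.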
orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-07-06 20:09:55.396106 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-07-06 20:09:55.396117 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-07-06 20:09:55.396133 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-07-06 20:09:55.396144 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-07-06 20:09:55.396155 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-07-06 20:09:55.396165 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-07-06 20:09:55.396181 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-07-06 20:09:55.396201 | orchestrator | 2025-07-06 20:09:55.396220 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-07-06 20:09:55.396239 | orchestrator | Sunday 06 July 2025 20:09:38 +0000 (0:00:07.202) 0:00:48.078 *********** 2025-07-06 20:09:55.396258 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-07-06 20:09:55.396278 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:09:55.396298 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-07-06 20:09:55.396318 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:09:55.396336 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-07-06 20:09:55.396347 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:09:55.396359 | 
orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-07-06 20:09:55.396374 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-07-06 20:09:55.396393 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-07-06 20:09:55.396412 | orchestrator | 2025-07-06 20:09:55.396430 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-07-06 20:09:55.396445 | orchestrator | Sunday 06 July 2025 20:09:41 +0000 (0:00:02.519) 0:00:50.597 *********** 2025-07-06 20:09:55.396461 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-07-06 20:09:55.396543 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:09:55.396566 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-07-06 20:09:55.396605 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:09:55.396625 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-07-06 20:09:55.396642 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:09:55.396661 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-07-06 20:09:55.396680 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-07-06 20:09:55.396698 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-07-06 20:09:55.396709 | orchestrator | 2025-07-06 20:09:55.396721 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-07-06 20:09:55.396731 | orchestrator | Sunday 06 July 2025 20:09:44 +0000 (0:00:03.472) 0:00:54.070 *********** 2025-07-06 20:09:55.396742 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:09:55.396753 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:09:55.396927 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:09:55.396939 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:09:55.396949 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:09:55.396960 | 
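The two tasks above ("Ensuring OVS bridge is properly setup" / "Ensuring OVS ports are properly setup") create `br-ex` and attach `vxlan0` on the network nodes only; the compute-only nodes skip them. The idempotent flavor of this setup can be sketched with `ovs-vsctl --may-exist` (a real ovs-vsctl flag), though the role itself drives this through its own module rather than raw commands; `bridge_commands` below is a hypothetical helper:

```python
def bridge_commands(bridges, ports):
    """Generate idempotent ovs-vsctl commands for a list of bridges
    and a list of (bridge, port) pairs.

    --may-exist makes both calls no-ops when the bridge/port is
    already present, matching the 'properly setup' semantics.
    """
    cmds = [["ovs-vsctl", "--may-exist", "add-br", b] for b in bridges]
    cmds += [["ovs-vsctl", "--may-exist", "add-port", b, p] for b, p in ports]
    return cmds
```

For this play the inputs would be `["br-ex"]` and `[("br-ex", "vxlan0")]`, matching the loop items in the log.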
orchestrator | changed: [testbed-node-2]
2025-07-06 20:09:55.396971 | orchestrator |
2025-07-06 20:09:55.396982 | orchestrator | PLAY RECAP *********************************************************************
2025-07-06 20:09:55.396992 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-07-06 20:09:55.397004 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-07-06 20:09:55.397014 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-07-06 20:09:55.397025 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-07-06 20:09:55.397042 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-07-06 20:09:55.397071 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-07-06 20:09:55.397087 | orchestrator |
2025-07-06 20:09:55.397102 | orchestrator |
2025-07-06 20:09:55.397116 | orchestrator | TASKS RECAP ********************************************************************
2025-07-06 20:09:55.397129 | orchestrator | Sunday 06 July 2025 20:09:53 +0000 (0:00:08.487) 0:01:02.557 ***********
2025-07-06 20:09:55.397146 | orchestrator | ===============================================================================
2025-07-06 20:09:55.397161 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 17.77s
2025-07-06 20:09:55.397179 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.80s
2025-07-06 20:09:55.397191 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.20s
2025-07-06 20:09:55.397201 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.55s
2025-07-06 20:09:55.397210 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.47s
2025-07-06 20:09:55.397220 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.30s
2025-07-06 20:09:55.397229 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.52s
2025-07-06 20:09:55.397239 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.66s
2025-07-06 20:09:55.397248 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.63s
2025-07-06 20:09:55.397258 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.56s
2025-07-06 20:09:55.397267 | orchestrator | module-load : Load modules ---------------------------------------------- 1.36s
2025-07-06 20:09:55.397287 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.20s
2025-07-06 20:09:55.397297 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.20s
2025-07-06 20:09:55.397307 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.13s
2025-07-06 20:09:55.397316 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.11s
2025-07-06 20:09:55.397326 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.06s
2025-07-06 20:09:55.397335 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.86s
2025-07-06 20:09:55.397344 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.80s
2025-07-06 20:09:55.397354 | orchestrator | 2025-07-06 20:09:55 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:09:55.397364 | orchestrator | 2025-07-06 20:09:55 | INFO  | Task 0934b857-c2e1-4250-9a1b-b0a86c149407 is in state STARTED
2025-07-06
20:09:55.397373 | orchestrator | 2025-07-06 20:09:55 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:09:58.429817 | orchestrator | 2025-07-06 20:09:58 | INFO  | Task e275f502-66db-4ab5-b392-e53a2466890f is in state STARTED 2025-07-06 20:09:58.430979 | orchestrator | 2025-07-06 20:09:58 | INFO  | Task 8f3c10b3-4942-4022-b704-474f881ff5a2 is in state STARTED 2025-07-06 20:09:58.431906 | orchestrator | 2025-07-06 20:09:58 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:09:58.433654 | orchestrator | 2025-07-06 20:09:58 | INFO  | Task 0934b857-c2e1-4250-9a1b-b0a86c149407 is in state STARTED 2025-07-06 20:09:58.433696 | orchestrator | 2025-07-06 20:09:58 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:10:01.471998 | orchestrator | 2025-07-06 20:10:01 | INFO  | Task e275f502-66db-4ab5-b392-e53a2466890f is in state STARTED 2025-07-06 20:10:01.472208 | orchestrator | 2025-07-06 20:10:01 | INFO  | Task 8f3c10b3-4942-4022-b704-474f881ff5a2 is in state STARTED 2025-07-06 20:10:01.472227 | orchestrator | 2025-07-06 20:10:01 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:10:01.472239 | orchestrator | 2025-07-06 20:10:01 | INFO  | Task 0934b857-c2e1-4250-9a1b-b0a86c149407 is in state STARTED 2025-07-06 20:10:01.472251 | orchestrator | 2025-07-06 20:10:01 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:10:04.507719 | orchestrator | 2025-07-06 20:10:04 | INFO  | Task e275f502-66db-4ab5-b392-e53a2466890f is in state STARTED 2025-07-06 20:10:04.507831 | orchestrator | 2025-07-06 20:10:04 | INFO  | Task 8f3c10b3-4942-4022-b704-474f881ff5a2 is in state STARTED 2025-07-06 20:10:04.509267 | orchestrator | 2025-07-06 20:10:04 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:10:04.511309 | orchestrator | 2025-07-06 20:10:04 | INFO  | Task 0934b857-c2e1-4250-9a1b-b0a86c149407 is in state STARTED 2025-07-06 20:10:04.511368 | orchestrator 
| 2025-07-06 20:10:44 | INFO  | Task e275f502-66db-4ab5-b392-e53a2466890f is in state STARTED 2025-07-06 20:10:44.121964 | orchestrator | 2025-07-06 20:10:44 | INFO  | Task 8f3c10b3-4942-4022-b704-474f881ff5a2 is in state STARTED 2025-07-06 20:10:44.124314 | orchestrator | 2025-07-06 20:10:44 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:10:44.125865 | orchestrator | 2025-07-06 20:10:44 | INFO  | Task 0934b857-c2e1-4250-9a1b-b0a86c149407 is in state STARTED 2025-07-06 20:10:44.126100 | orchestrator | 2025-07-06 20:10:44 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:10:47.170822 | orchestrator | 2025-07-06 20:10:47 | INFO  | Task e275f502-66db-4ab5-b392-e53a2466890f is in state STARTED 2025-07-06 20:10:47.174958 | orchestrator | 2025-07-06 20:10:47 | INFO  | Task 8f3c10b3-4942-4022-b704-474f881ff5a2 is in state STARTED 2025-07-06 20:10:47.175018 | orchestrator | 2025-07-06 20:10:47 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:10:47.176257 | orchestrator | 2025-07-06 20:10:47 | INFO  | Task 0934b857-c2e1-4250-9a1b-b0a86c149407 is in state STARTED 2025-07-06 20:10:47.176940 | orchestrator | 2025-07-06 20:10:47 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:10:50.214492 | orchestrator | 2025-07-06 20:10:50 | INFO  | Task e275f502-66db-4ab5-b392-e53a2466890f is in state STARTED 2025-07-06 20:10:50.215084 | orchestrator | 2025-07-06 20:10:50 | INFO  | Task 8f3c10b3-4942-4022-b704-474f881ff5a2 is in state STARTED 2025-07-06 20:10:50.219940 | orchestrator | 2025-07-06 20:10:50 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:10:50.228206 | orchestrator | 2025-07-06 20:10:50 | INFO  | Task 0934b857-c2e1-4250-9a1b-b0a86c149407 is in state STARTED 2025-07-06 20:10:50.228283 | orchestrator | 2025-07-06 20:10:50 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:10:53.263252 | orchestrator | 2025-07-06 20:10:53 | INFO  | 
Task e275f502-66db-4ab5-b392-e53a2466890f is in state STARTED 2025-07-06 20:10:53.263659 | orchestrator | 2025-07-06 20:10:53 | INFO  | Task 8f3c10b3-4942-4022-b704-474f881ff5a2 is in state STARTED 2025-07-06 20:10:53.265613 | orchestrator | 2025-07-06 20:10:53 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:10:53.267556 | orchestrator | 2025-07-06 20:10:53 | INFO  | Task 0934b857-c2e1-4250-9a1b-b0a86c149407 is in state STARTED 2025-07-06 20:10:53.268074 | orchestrator | 2025-07-06 20:10:53 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:10:56.323162 | orchestrator | 2025-07-06 20:10:56 | INFO  | Task e275f502-66db-4ab5-b392-e53a2466890f is in state STARTED 2025-07-06 20:10:56.323827 | orchestrator | 2025-07-06 20:10:56 | INFO  | Task 8f3c10b3-4942-4022-b704-474f881ff5a2 is in state STARTED 2025-07-06 20:10:56.324936 | orchestrator | 2025-07-06 20:10:56 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:10:56.329040 | orchestrator | 2025-07-06 20:10:56 | INFO  | Task 0934b857-c2e1-4250-9a1b-b0a86c149407 is in state STARTED 2025-07-06 20:10:56.329123 | orchestrator | 2025-07-06 20:10:56 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:10:59.379733 | orchestrator | 2025-07-06 20:10:59 | INFO  | Task e275f502-66db-4ab5-b392-e53a2466890f is in state STARTED 2025-07-06 20:10:59.385252 | orchestrator | 2025-07-06 20:10:59 | INFO  | Task 8f3c10b3-4942-4022-b704-474f881ff5a2 is in state STARTED 2025-07-06 20:10:59.386181 | orchestrator | 2025-07-06 20:10:59 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:10:59.388711 | orchestrator | 2025-07-06 20:10:59 | INFO  | Task 0934b857-c2e1-4250-9a1b-b0a86c149407 is in state STARTED 2025-07-06 20:10:59.389823 | orchestrator | 2025-07-06 20:10:59 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:11:02.435317 | orchestrator | 2025-07-06 20:11:02 | INFO  | Task 
e275f502-66db-4ab5-b392-e53a2466890f is in state STARTED 2025-07-06 20:11:02.438214 | orchestrator | 2025-07-06 20:11:02 | INFO  | Task 8f3c10b3-4942-4022-b704-474f881ff5a2 is in state STARTED 2025-07-06 20:11:02.439399 | orchestrator | 2025-07-06 20:11:02 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:11:02.441319 | orchestrator | 2025-07-06 20:11:02 | INFO  | Task 0934b857-c2e1-4250-9a1b-b0a86c149407 is in state STARTED 2025-07-06 20:11:02.441553 | orchestrator | 2025-07-06 20:11:02 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:11:05.490533 | orchestrator | 2025-07-06 20:11:05 | INFO  | Task e275f502-66db-4ab5-b392-e53a2466890f is in state STARTED 2025-07-06 20:11:05.492966 | orchestrator | 2025-07-06 20:11:05 | INFO  | Task 8f3c10b3-4942-4022-b704-474f881ff5a2 is in state STARTED 2025-07-06 20:11:05.493057 | orchestrator | 2025-07-06 20:11:05 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:11:05.494001 | orchestrator | 2025-07-06 20:11:05 | INFO  | Task 0934b857-c2e1-4250-9a1b-b0a86c149407 is in state STARTED 2025-07-06 20:11:05.494238 | orchestrator | 2025-07-06 20:11:05 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:11:08.546009 | orchestrator | 2025-07-06 20:11:08 | INFO  | Task e275f502-66db-4ab5-b392-e53a2466890f is in state STARTED 2025-07-06 20:11:08.547585 | orchestrator | 2025-07-06 20:11:08 | INFO  | Task 8f3c10b3-4942-4022-b704-474f881ff5a2 is in state STARTED 2025-07-06 20:11:08.549319 | orchestrator | 2025-07-06 20:11:08 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:11:08.551336 | orchestrator | 2025-07-06 20:11:08 | INFO  | Task 0934b857-c2e1-4250-9a1b-b0a86c149407 is in state STARTED 2025-07-06 20:11:08.551413 | orchestrator | 2025-07-06 20:11:08 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:11:11.597995 | orchestrator | 2025-07-06 20:11:11 | INFO  | Task 
e275f502-66db-4ab5-b392-e53a2466890f is in state STARTED 2025-07-06 20:11:11.600766 | orchestrator | 2025-07-06 20:11:11 | INFO  | Task 8f3c10b3-4942-4022-b704-474f881ff5a2 is in state STARTED 2025-07-06 20:11:11.602382 | orchestrator | 2025-07-06 20:11:11 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:11:11.604077 | orchestrator | 2025-07-06 20:11:11 | INFO  | Task 0934b857-c2e1-4250-9a1b-b0a86c149407 is in state STARTED 2025-07-06 20:11:11.604105 | orchestrator | 2025-07-06 20:11:11 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:11:14.648509 | orchestrator | 2025-07-06 20:11:14 | INFO  | Task e275f502-66db-4ab5-b392-e53a2466890f is in state STARTED 2025-07-06 20:11:14.648777 | orchestrator | 2025-07-06 20:11:14 | INFO  | Task 8f3c10b3-4942-4022-b704-474f881ff5a2 is in state STARTED 2025-07-06 20:11:14.648829 | orchestrator | 2025-07-06 20:11:14 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:11:14.649701 | orchestrator | 2025-07-06 20:11:14 | INFO  | Task 0934b857-c2e1-4250-9a1b-b0a86c149407 is in state STARTED 2025-07-06 20:11:14.649818 | orchestrator | 2025-07-06 20:11:14 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:11:17.695771 | orchestrator | 2025-07-06 20:11:17 | INFO  | Task e275f502-66db-4ab5-b392-e53a2466890f is in state STARTED 2025-07-06 20:11:17.695992 | orchestrator | 2025-07-06 20:11:17 | INFO  | Task 8f3c10b3-4942-4022-b704-474f881ff5a2 is in state STARTED 2025-07-06 20:11:17.700934 | orchestrator | 2025-07-06 20:11:17 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:11:17.701017 | orchestrator | 2025-07-06 20:11:17 | INFO  | Task 0934b857-c2e1-4250-9a1b-b0a86c149407 is in state STARTED 2025-07-06 20:11:17.701058 | orchestrator | 2025-07-06 20:11:17 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:11:20.734728 | orchestrator | 2025-07-06 20:11:20 | INFO  | Task 
e275f502-66db-4ab5-b392-e53a2466890f is in state STARTED 2025-07-06 20:11:20.734975 | orchestrator | 2025-07-06 20:11:20 | INFO  | Task 8f3c10b3-4942-4022-b704-474f881ff5a2 is in state STARTED 2025-07-06 20:11:20.737223 | orchestrator | 2025-07-06 20:11:20 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:11:20.738074 | orchestrator | 2025-07-06 20:11:20 | INFO  | Task 0934b857-c2e1-4250-9a1b-b0a86c149407 is in state STARTED 2025-07-06 20:11:20.738114 | orchestrator | 2025-07-06 20:11:20 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:11:23.771625 | orchestrator | 2025-07-06 20:11:23 | INFO  | Task e275f502-66db-4ab5-b392-e53a2466890f is in state STARTED 2025-07-06 20:11:23.771991 | orchestrator | 2025-07-06 20:11:23 | INFO  | Task 8f3c10b3-4942-4022-b704-474f881ff5a2 is in state STARTED 2025-07-06 20:11:23.772898 | orchestrator | 2025-07-06 20:11:23 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:11:23.773542 | orchestrator | 2025-07-06 20:11:23 | INFO  | Task 0934b857-c2e1-4250-9a1b-b0a86c149407 is in state STARTED 2025-07-06 20:11:23.775808 | orchestrator | 2025-07-06 20:11:23 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:11:26.822759 | orchestrator | 2025-07-06 20:11:26 | INFO  | Task e275f502-66db-4ab5-b392-e53a2466890f is in state STARTED 2025-07-06 20:11:26.822874 | orchestrator | 2025-07-06 20:11:26 | INFO  | Task 8f3c10b3-4942-4022-b704-474f881ff5a2 is in state SUCCESS 2025-07-06 20:11:26.823949 | orchestrator | 2025-07-06 20:11:26.824010 | orchestrator | 2025-07-06 20:11:26.824027 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-07-06 20:11:26.824041 | orchestrator | 2025-07-06 20:11:26.824054 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-07-06 20:11:26.824068 | orchestrator | Sunday 06 July 2025 20:09:13 +0000 (0:00:00.092) 0:00:00.092 
*********** 2025-07-06 20:11:26.824082 | orchestrator | ok: [localhost] => { 2025-07-06 20:11:26.824098 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-07-06 20:11:26.824111 | orchestrator | } 2025-07-06 20:11:26.824125 | orchestrator | 2025-07-06 20:11:26.824138 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-07-06 20:11:26.824152 | orchestrator | Sunday 06 July 2025 20:09:13 +0000 (0:00:00.044) 0:00:00.136 *********** 2025-07-06 20:11:26.824167 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-07-06 20:11:26.824182 | orchestrator | ...ignoring 2025-07-06 20:11:26.824194 | orchestrator | 2025-07-06 20:11:26.824207 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-07-06 20:11:26.824221 | orchestrator | Sunday 06 July 2025 20:09:15 +0000 (0:00:02.734) 0:00:02.871 *********** 2025-07-06 20:11:26.824234 | orchestrator | skipping: [localhost] 2025-07-06 20:11:26.824247 | orchestrator | 2025-07-06 20:11:26.824259 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-07-06 20:11:26.824268 | orchestrator | Sunday 06 July 2025 20:09:15 +0000 (0:00:00.056) 0:00:02.927 *********** 2025-07-06 20:11:26.824276 | orchestrator | ok: [localhost] 2025-07-06 20:11:26.824284 | orchestrator | 2025-07-06 20:11:26.824292 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-06 20:11:26.824300 | orchestrator | 2025-07-06 20:11:26.824308 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-06 20:11:26.824316 | orchestrator | Sunday 06 July 2025 20:09:16 +0000 (0:00:00.144) 0:00:03.072 *********** 2025-07-06 
20:11:26.824348 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:11:26.824357 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:11:26.824364 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:11:26.824372 | orchestrator | 2025-07-06 20:11:26.824380 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-06 20:11:26.824388 | orchestrator | Sunday 06 July 2025 20:09:16 +0000 (0:00:00.539) 0:00:03.612 *********** 2025-07-06 20:11:26.824396 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-07-06 20:11:26.824427 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-07-06 20:11:26.824436 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-07-06 20:11:26.824443 | orchestrator | 2025-07-06 20:11:26.824451 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-07-06 20:11:26.824459 | orchestrator | 2025-07-06 20:11:26.824467 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-07-06 20:11:26.824475 | orchestrator | Sunday 06 July 2025 20:09:17 +0000 (0:00:00.680) 0:00:04.293 *********** 2025-07-06 20:11:26.824483 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:11:26.824491 | orchestrator | 2025-07-06 20:11:26.824499 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-07-06 20:11:26.824521 | orchestrator | Sunday 06 July 2025 20:09:17 +0000 (0:00:00.577) 0:00:04.870 *********** 2025-07-06 20:11:26.824531 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:11:26.824540 | orchestrator | 2025-07-06 20:11:26.824548 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-07-06 20:11:26.824557 | orchestrator | Sunday 06 July 2025 20:09:18 +0000 (0:00:00.946) 0:00:05.817 *********** 
2025-07-06 20:11:26.824566 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:11:26.824576 | orchestrator | 2025-07-06 20:11:26.824584 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2025-07-06 20:11:26.824593 | orchestrator | Sunday 06 July 2025 20:09:19 +0000 (0:00:00.367) 0:00:06.184 *********** 2025-07-06 20:11:26.824602 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:11:26.824611 | orchestrator | 2025-07-06 20:11:26.824620 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-07-06 20:11:26.824630 | orchestrator | Sunday 06 July 2025 20:09:19 +0000 (0:00:00.402) 0:00:06.586 *********** 2025-07-06 20:11:26.824639 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:11:26.824647 | orchestrator | 2025-07-06 20:11:26.824656 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-07-06 20:11:26.824665 | orchestrator | Sunday 06 July 2025 20:09:20 +0000 (0:00:00.407) 0:00:06.994 *********** 2025-07-06 20:11:26.824674 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:11:26.824683 | orchestrator | 2025-07-06 20:11:26.824692 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-07-06 20:11:26.824701 | orchestrator | Sunday 06 July 2025 20:09:20 +0000 (0:00:00.784) 0:00:07.782 *********** 2025-07-06 20:11:26.824711 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:11:26.824720 | orchestrator | 2025-07-06 20:11:26.824729 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-07-06 20:11:26.824738 | orchestrator | Sunday 06 July 2025 20:09:21 +0000 (0:00:01.144) 0:00:08.927 *********** 2025-07-06 20:11:26.824747 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:11:26.824756 | orchestrator | 2025-07-06 
20:11:26.824765 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-07-06 20:11:26.824773 | orchestrator | Sunday 06 July 2025 20:09:23 +0000 (0:00:01.073) 0:00:10.001 *********** 2025-07-06 20:11:26.824782 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:11:26.824791 | orchestrator | 2025-07-06 20:11:26.824800 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-07-06 20:11:26.824808 | orchestrator | Sunday 06 July 2025 20:09:24 +0000 (0:00:01.266) 0:00:11.267 *********** 2025-07-06 20:11:26.824823 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:11:26.824832 | orchestrator | 2025-07-06 20:11:26.824856 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-07-06 20:11:26.824865 | orchestrator | Sunday 06 July 2025 20:09:25 +0000 (0:00:01.325) 0:00:12.595 *********** 2025-07-06 20:11:26.824879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': 
'15672', 'host_group': 'rabbitmq'}}}}) 2025-07-06 20:11:26.824893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-06 20:11:26.824908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-06 20:11:26.824917 | orchestrator | 2025-07-06 20:11:26.824925 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-07-06 20:11:26.824933 | orchestrator | Sunday 06 July 2025 20:09:26 +0000 (0:00:01.042) 0:00:13.638 *********** 2025-07-06 20:11:26.824953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-06 20:11:26.824977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 
'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-06 20:11:26.824997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-06 20:11:26.825012 | orchestrator | 2025-07-06 20:11:26.825025 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-07-06 20:11:26.825038 | orchestrator | 
Sunday 06 July 2025 20:09:28 +0000 (0:00:01.612) 0:00:15.251 *********** 2025-07-06 20:11:26.825047 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-07-06 20:11:26.825055 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-07-06 20:11:26.825064 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-07-06 20:11:26.825072 | orchestrator | 2025-07-06 20:11:26.825079 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-07-06 20:11:26.825087 | orchestrator | Sunday 06 July 2025 20:09:29 +0000 (0:00:01.606) 0:00:16.858 *********** 2025-07-06 20:11:26.825095 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-07-06 20:11:26.825103 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-07-06 20:11:26.825116 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-07-06 20:11:26.825124 | orchestrator | 2025-07-06 20:11:26.825132 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-07-06 20:11:26.825140 | orchestrator | Sunday 06 July 2025 20:09:31 +0000 (0:00:01.665) 0:00:18.523 *********** 2025-07-06 20:11:26.825147 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-07-06 20:11:26.825155 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-07-06 20:11:26.825163 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-07-06 20:11:26.825170 | orchestrator | 2025-07-06 20:11:26.825178 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 
2025-07-06 20:11:26.825186 | orchestrator | Sunday 06 July 2025 20:09:32 +0000 (0:00:01.358) 0:00:19.882 *********** 2025-07-06 20:11:26.825199 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-07-06 20:11:26.825207 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-07-06 20:11:26.825215 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-07-06 20:11:26.825222 | orchestrator | 2025-07-06 20:11:26.825230 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-07-06 20:11:26.825238 | orchestrator | Sunday 06 July 2025 20:09:34 +0000 (0:00:01.835) 0:00:21.717 *********** 2025-07-06 20:11:26.825246 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-07-06 20:11:26.825253 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-07-06 20:11:26.825261 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-07-06 20:11:26.825269 | orchestrator | 2025-07-06 20:11:26.825276 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-07-06 20:11:26.825284 | orchestrator | Sunday 06 July 2025 20:09:36 +0000 (0:00:01.510) 0:00:23.228 *********** 2025-07-06 20:11:26.825292 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-07-06 20:11:26.825300 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-07-06 20:11:26.825308 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-07-06 20:11:26.825315 | orchestrator | 2025-07-06 20:11:26.825323 | orchestrator | TASK [rabbitmq : 
include_tasks] ************************************************ 2025-07-06 20:11:26.825331 | orchestrator | Sunday 06 July 2025 20:09:37 +0000 (0:00:01.738) 0:00:24.966 *********** 2025-07-06 20:11:26.825339 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:11:26.825346 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:11:26.825354 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:11:26.825362 | orchestrator | 2025-07-06 20:11:26.825369 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-07-06 20:11:26.825377 | orchestrator | Sunday 06 July 2025 20:09:38 +0000 (0:00:00.426) 0:00:25.393 *********** 2025-07-06 20:11:26.825396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-06 20:11:26.825482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 
'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-06 20:11:26.825500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-06 20:11:26.825510 | orchestrator | 2025-07-06 
20:11:26.825518 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-07-06 20:11:26.825525 | orchestrator | Sunday 06 July 2025 20:09:39 +0000 (0:00:01.375) 0:00:26.769 *********** 2025-07-06 20:11:26.825533 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:11:26.825541 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:11:26.825549 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:11:26.825557 | orchestrator | 2025-07-06 20:11:26.825564 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-07-06 20:11:26.825572 | orchestrator | Sunday 06 July 2025 20:09:40 +0000 (0:00:00.890) 0:00:27.659 *********** 2025-07-06 20:11:26.825580 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:11:26.825588 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:11:26.825595 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:11:26.825603 | orchestrator | 2025-07-06 20:11:26.825611 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-07-06 20:11:26.825619 | orchestrator | Sunday 06 July 2025 20:09:48 +0000 (0:00:08.150) 0:00:35.810 *********** 2025-07-06 20:11:26.825626 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:11:26.825634 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:11:26.825641 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:11:26.825649 | orchestrator | 2025-07-06 20:11:26.825657 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-07-06 20:11:26.825664 | orchestrator | 2025-07-06 20:11:26.825678 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-07-06 20:11:26.825686 | orchestrator | Sunday 06 July 2025 20:09:49 +0000 (0:00:00.360) 0:00:36.170 *********** 2025-07-06 20:11:26.825694 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:11:26.825702 | orchestrator | 
2025-07-06 20:11:26.825709 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-07-06 20:11:26.825717 | orchestrator | Sunday 06 July 2025 20:09:49 +0000 (0:00:00.588) 0:00:36.759 *********** 2025-07-06 20:11:26.825725 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:11:26.825732 | orchestrator | 2025-07-06 20:11:26.825740 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-07-06 20:11:26.825748 | orchestrator | Sunday 06 July 2025 20:09:50 +0000 (0:00:00.237) 0:00:36.997 *********** 2025-07-06 20:11:26.825754 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:11:26.825761 | orchestrator | 2025-07-06 20:11:26.825771 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-07-06 20:11:26.825778 | orchestrator | Sunday 06 July 2025 20:09:56 +0000 (0:00:06.723) 0:00:43.720 *********** 2025-07-06 20:11:26.825784 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:11:26.825791 | orchestrator | 2025-07-06 20:11:26.825797 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-07-06 20:11:26.825804 | orchestrator | 2025-07-06 20:11:26.825810 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-07-06 20:11:26.825817 | orchestrator | Sunday 06 July 2025 20:10:45 +0000 (0:00:49.159) 0:01:32.880 *********** 2025-07-06 20:11:26.825824 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:11:26.825830 | orchestrator | 2025-07-06 20:11:26.825837 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-07-06 20:11:26.825843 | orchestrator | Sunday 06 July 2025 20:10:46 +0000 (0:00:00.636) 0:01:33.516 *********** 2025-07-06 20:11:26.825850 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:11:26.825856 | orchestrator | 2025-07-06 20:11:26.825863 | orchestrator | TASK 
[rabbitmq : Restart rabbitmq container] *********************************** 2025-07-06 20:11:26.825870 | orchestrator | Sunday 06 July 2025 20:10:46 +0000 (0:00:00.434) 0:01:33.951 *********** 2025-07-06 20:11:26.825876 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:11:26.825883 | orchestrator | 2025-07-06 20:11:26.825889 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-07-06 20:11:26.825896 | orchestrator | Sunday 06 July 2025 20:10:48 +0000 (0:00:01.993) 0:01:35.944 *********** 2025-07-06 20:11:26.825902 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:11:26.825909 | orchestrator | 2025-07-06 20:11:26.825915 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-07-06 20:11:26.825922 | orchestrator | 2025-07-06 20:11:26.825928 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-07-06 20:11:26.825935 | orchestrator | Sunday 06 July 2025 20:11:04 +0000 (0:00:15.325) 0:01:51.270 *********** 2025-07-06 20:11:26.825941 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:11:26.825948 | orchestrator | 2025-07-06 20:11:26.825954 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-07-06 20:11:26.825961 | orchestrator | Sunday 06 July 2025 20:11:04 +0000 (0:00:00.586) 0:01:51.856 *********** 2025-07-06 20:11:26.825968 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:11:26.825974 | orchestrator | 2025-07-06 20:11:26.825980 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-07-06 20:11:26.825987 | orchestrator | Sunday 06 July 2025 20:11:05 +0000 (0:00:00.292) 0:01:52.149 *********** 2025-07-06 20:11:26.825994 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:11:26.826000 | orchestrator | 2025-07-06 20:11:26.826007 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] 
******************************** 2025-07-06 20:11:26.826061 | orchestrator | Sunday 06 July 2025 20:11:06 +0000 (0:00:01.667) 0:01:53.816 *********** 2025-07-06 20:11:26.826071 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:11:26.826077 | orchestrator | 2025-07-06 20:11:26.826084 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-07-06 20:11:26.826097 | orchestrator | 2025-07-06 20:11:26.826130 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-07-06 20:11:26.826137 | orchestrator | Sunday 06 July 2025 20:11:21 +0000 (0:00:14.706) 0:02:08.523 *********** 2025-07-06 20:11:26.826144 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:11:26.826151 | orchestrator | 2025-07-06 20:11:26.826157 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-07-06 20:11:26.826164 | orchestrator | Sunday 06 July 2025 20:11:22 +0000 (0:00:00.720) 0:02:09.244 *********** 2025-07-06 20:11:26.826171 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-07-06 20:11:26.826177 | orchestrator | enable_outward_rabbitmq_True 2025-07-06 20:11:26.826184 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-07-06 20:11:26.826190 | orchestrator | outward_rabbitmq_restart 2025-07-06 20:11:26.826197 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:11:26.826203 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:11:26.826210 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:11:26.826216 | orchestrator | 2025-07-06 20:11:26.826223 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-07-06 20:11:26.826230 | orchestrator | skipping: no hosts matched 2025-07-06 20:11:26.826236 | orchestrator | 2025-07-06 20:11:26.826243 | orchestrator | PLAY [Restart rabbitmq (outward) services] 
************************************* 2025-07-06 20:11:26.826250 | orchestrator | skipping: no hosts matched 2025-07-06 20:11:26.826256 | orchestrator | 2025-07-06 20:11:26.826263 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-07-06 20:11:26.826269 | orchestrator | skipping: no hosts matched 2025-07-06 20:11:26.826276 | orchestrator | 2025-07-06 20:11:26.826283 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:11:26.826289 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-07-06 20:11:26.826298 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-07-06 20:11:26.826305 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 20:11:26.826311 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 20:11:26.826318 | orchestrator | 2025-07-06 20:11:26.826324 | orchestrator | 2025-07-06 20:11:26.826331 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:11:26.826338 | orchestrator | Sunday 06 July 2025 20:11:25 +0000 (0:00:02.901) 0:02:12.145 *********** 2025-07-06 20:11:26.826344 | orchestrator | =============================================================================== 2025-07-06 20:11:26.826355 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 79.19s 2025-07-06 20:11:26.826362 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.38s 2025-07-06 20:11:26.826368 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 8.15s 2025-07-06 20:11:26.826375 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.90s 
2025-07-06 20:11:26.826381 | orchestrator | Check RabbitMQ service -------------------------------------------------- 2.74s 2025-07-06 20:11:26.826388 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.84s 2025-07-06 20:11:26.826395 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.81s 2025-07-06 20:11:26.826418 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.74s 2025-07-06 20:11:26.826425 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.67s 2025-07-06 20:11:26.826440 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.61s 2025-07-06 20:11:26.826446 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.61s 2025-07-06 20:11:26.826453 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.51s 2025-07-06 20:11:26.826459 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.38s 2025-07-06 20:11:26.826466 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.36s 2025-07-06 20:11:26.826472 | orchestrator | rabbitmq : Remove ha-all policy from RabbitMQ --------------------------- 1.33s 2025-07-06 20:11:26.826479 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 1.27s 2025-07-06 20:11:26.826485 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.14s 2025-07-06 20:11:26.826492 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.07s 2025-07-06 20:11:26.826498 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.04s 2025-07-06 20:11:26.826505 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.96s 2025-07-06 
20:11:26.827621 | orchestrator | 2025-07-06 20:11:26 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:11:26.828175 | orchestrator | 2025-07-06 20:11:26 | INFO  | Task 0934b857-c2e1-4250-9a1b-b0a86c149407 is in state STARTED 2025-07-06 20:11:26.828237 | orchestrator | 2025-07-06 20:11:26 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:12:27.771595 | orchestrator | 2025-07-06 20:12:27 | INFO  | Task e275f502-66db-4ab5-b392-e53a2466890f is in state SUCCESS 2025-07-06 20:12:27.772550 | orchestrator | 2025-07-06 20:12:27.772592 | orchestrator | 2025-07-06 20:12:27.772606 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-06 20:12:27.772618 | orchestrator | 2025-07-06 20:12:27.772630 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-06 20:12:27.772641 | orchestrator | Sunday 06 July 2025 20:09:57 +0000 (0:00:00.195) 0:00:00.195 *********** 2025-07-06 20:12:27.773563 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:27.773605 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:27.773616 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:27.773627 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:27.773638 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:27.773649 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:27.773660 | orchestrator | 2025-07-06 20:12:27.773671 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-06 20:12:27.773683 | orchestrator | Sunday 06 July 2025 20:09:58 +0000 (0:00:00.997) 0:00:01.193 *********** 2025-07-06 20:12:27.773694 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-07-06 20:12:27.773705 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-07-06 20:12:27.773716 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-07-06 20:12:27.773726 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-07-06 20:12:27.773737 | orchestrator | ok:
[testbed-node-4] => (item=enable_ovn_True) 2025-07-06 20:12:27.773748 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-07-06 20:12:27.773758 | orchestrator | 2025-07-06 20:12:27.773769 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-07-06 20:12:27.773780 | orchestrator | 2025-07-06 20:12:27.773791 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-07-06 20:12:27.773802 | orchestrator | Sunday 06 July 2025 20:10:00 +0000 (0:00:01.049) 0:00:02.242 *********** 2025-07-06 20:12:27.773814 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:12:27.773826 | orchestrator | 2025-07-06 20:12:27.773837 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-07-06 20:12:27.773847 | orchestrator | Sunday 06 July 2025 20:10:01 +0000 (0:00:01.304) 0:00:03.547 *********** 2025-07-06 20:12:27.773861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:12:27.773875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-07-06 20:12:27.773892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:12:27.773925 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:12:27.773936 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:12:27.773948 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:12:27.773959 | orchestrator | 2025-07-06 20:12:27.773988 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] 
************ 2025-07-06 20:12:27.774000 | orchestrator | Sunday 06 July 2025 20:10:03 +0000 (0:00:01.815) 0:00:05.362 *********** 2025-07-06 20:12:27.774011 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:12:27.774465 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:12:27.774478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:12:27.774489 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:12:27.774500 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:12:27.774518 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:12:27.774541 | orchestrator | 2025-07-06 20:12:27.774552 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-07-06 20:12:27.774563 | orchestrator | Sunday 06 July 2025 20:10:04 +0000 (0:00:01.649) 0:00:07.011 *********** 2025-07-06 20:12:27.774574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:12:27.774585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:12:27.774608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:12:27.774619 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:12:27.774630 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:12:27.774642 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:12:27.774652 | 
orchestrator | 2025-07-06 20:12:27.774664 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-07-06 20:12:27.774675 | orchestrator | Sunday 06 July 2025 20:10:05 +0000 (0:00:01.181) 0:00:08.193 *********** 2025-07-06 20:12:27.774686 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:12:27.774697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:12:27.774719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:12:27.774730 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:12:27.774741 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:12:27.774752 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:12:27.774763 | orchestrator | 2025-07-06 20:12:27.774781 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-07-06 20:12:27.774792 | orchestrator | Sunday 06 July 2025 20:10:07 +0000 (0:00:01.675) 0:00:09.869 *********** 2025-07-06 20:12:27.774803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:12:27.774815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 
'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:12:27.774826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:12:27.774837 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:12:27.774854 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:12:27.774870 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:12:27.774881 | orchestrator | 2025-07-06 20:12:27.774892 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-07-06 20:12:27.774903 | orchestrator | Sunday 06 July 2025 20:10:09 +0000 (0:00:01.644) 0:00:11.513 *********** 2025-07-06 20:12:27.774914 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:12:27.774925 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:12:27.774936 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:12:27.774946 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:12:27.774957 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:12:27.774967 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:12:27.774978 | orchestrator | 2025-07-06 20:12:27.774989 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-07-06 20:12:27.775000 | orchestrator | Sunday 06 July 2025 20:10:11 +0000 (0:00:02.391) 0:00:13.905 *********** 2025-07-06 20:12:27.775011 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-07-06 20:12:27.775026 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-07-06 20:12:27.775039 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-07-06 20:12:27.775051 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-07-06 20:12:27.775063 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-07-06 20:12:27.775075 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-07-06 20:12:27.775088 | orchestrator | changed: [testbed-node-0] => (item={'name': 
'ovn-encap-type', 'value': 'geneve'}) 2025-07-06 20:12:27.775101 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-07-06 20:12:27.775120 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-07-06 20:12:27.775132 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-07-06 20:12:27.775145 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-07-06 20:12:27.775158 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-07-06 20:12:27.775170 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-07-06 20:12:27.775184 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-07-06 20:12:27.775197 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-07-06 20:12:27.775210 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-07-06 20:12:27.775232 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-07-06 20:12:27.775243 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-07-06 20:12:27.775254 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-07-06 20:12:27.775265 | orchestrator | changed: [testbed-node-2] => (item={'name': 
'ovn-remote-probe-interval', 'value': '60000'}) 2025-07-06 20:12:27.775276 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-07-06 20:12:27.775286 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-07-06 20:12:27.775297 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-07-06 20:12:27.775307 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-07-06 20:12:27.775318 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-07-06 20:12:27.775329 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-07-06 20:12:27.775340 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-07-06 20:12:27.775350 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-07-06 20:12:27.775421 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-07-06 20:12:27.775433 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-07-06 20:12:27.775444 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-07-06 20:12:27.775460 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-07-06 20:12:27.775471 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-07-06 20:12:27.775482 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-07-06 20:12:27.775491 | orchestrator | changed: [testbed-node-5] => (item={'name': 
'ovn-monitor-all', 'value': False}) 2025-07-06 20:12:27.775501 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-07-06 20:12:27.775510 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-07-06 20:12:27.775520 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-07-06 20:12:27.775529 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-07-06 20:12:27.775539 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-07-06 20:12:27.775548 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-07-06 20:12:27.775558 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-07-06 20:12:27.775567 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-07-06 20:12:27.775578 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-07-06 20:12:27.775594 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-07-06 20:12:27.775610 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-07-06 20:12:27.775620 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-07-06 20:12:27.775629 | orchestrator 
| changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-07-06 20:12:27.775639 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-07-06 20:12:27.775648 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-07-06 20:12:27.775658 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-07-06 20:12:27.775667 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-07-06 20:12:27.775677 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-07-06 20:12:27.775686 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-07-06 20:12:27.775696 | orchestrator | 2025-07-06 20:12:27.775705 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-07-06 20:12:27.775715 | orchestrator | Sunday 06 July 2025 20:10:31 +0000 (0:00:19.483) 0:00:33.388 *********** 2025-07-06 20:12:27.775725 | orchestrator | 2025-07-06 20:12:27.775734 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-07-06 20:12:27.775744 | orchestrator | Sunday 06 July 2025 20:10:31 +0000 (0:00:00.064) 0:00:33.453 *********** 2025-07-06 20:12:27.775753 | orchestrator | 2025-07-06 20:12:27.775763 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-07-06 20:12:27.775772 | orchestrator | Sunday 06 July 2025 20:10:31 +0000 (0:00:00.064) 0:00:33.517 *********** 2025-07-06 20:12:27.775781 | orchestrator | 
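The "Configure OVN in OVSDB" task above writes per-chassis settings as `external_ids` keys in the local Open_vSwitch table, which is the standard way ovn-controller discovers its southbound DB endpoints, tunnel endpoint IP, and encapsulation type. A sketch of the equivalent `ovs-vsctl` invocations (an assumption about the underlying mechanism, not the kolla role's literal code), using the node-0 items from the log:

```python
# Items mirror the logged task output for testbed-node-0.
items = [
    {"name": "ovn-encap-ip", "value": "192.168.16.10"},
    {"name": "ovn-encap-type", "value": "geneve"},
    {"name": "ovn-remote",
     "value": "tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,"
              "tcp:192.168.16.12:6642"},
    {"name": "ovn-remote-probe-interval", "value": "60000"},
]

def ovs_vsctl_cmds(items):
    """Render each name/value item as an `ovs-vsctl set` command on the
    singleton Open_vSwitch record ('.')."""
    return [
        ["ovs-vsctl", "set", "open_vswitch", ".",
         f"external_ids:{i['name']}={i['value']}"]
        for i in items
    ]
```

The `ovn-remote` value lists all three control-plane nodes' southbound DB endpoints (port 6642), so each chassis can fail over between cluster members; `geneve` is the tunnel encapsulation carrying overlay traffic between the `ovn-encap-ip` addresses.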
2025-07-06 20:12:27.775791 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-07-06 20:12:27.775800 | orchestrator | Sunday 06 July 2025 20:10:31 +0000 (0:00:00.063) 0:00:33.581 *********** 2025-07-06 20:12:27.775810 | orchestrator | 2025-07-06 20:12:27.775820 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-07-06 20:12:27.775829 | orchestrator | Sunday 06 July 2025 20:10:31 +0000 (0:00:00.063) 0:00:33.645 *********** 2025-07-06 20:12:27.775838 | orchestrator | 2025-07-06 20:12:27.775848 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-07-06 20:12:27.775857 | orchestrator | Sunday 06 July 2025 20:10:31 +0000 (0:00:00.064) 0:00:33.709 *********** 2025-07-06 20:12:27.775867 | orchestrator | 2025-07-06 20:12:27.775876 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-07-06 20:12:27.775886 | orchestrator | Sunday 06 July 2025 20:10:31 +0000 (0:00:00.067) 0:00:33.776 *********** 2025-07-06 20:12:27.775895 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:12:27.775905 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:27.775914 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:27.775923 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:12:27.775933 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:27.775942 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:12:27.775952 | orchestrator | 2025-07-06 20:12:27.775965 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-07-06 20:12:27.775976 | orchestrator | Sunday 06 July 2025 20:10:33 +0000 (0:00:01.734) 0:00:35.510 *********** 2025-07-06 20:12:27.775985 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:12:27.775995 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:12:27.776005 | orchestrator | changed: [testbed-node-1] 2025-07-06 
20:12:27.776014 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:12:27.776030 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:12:27.776039 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:12:27.776049 | orchestrator | 2025-07-06 20:12:27.776059 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-07-06 20:12:27.776068 | orchestrator | 2025-07-06 20:12:27.776078 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-07-06 20:12:27.776087 | orchestrator | Sunday 06 July 2025 20:11:14 +0000 (0:00:41.080) 0:01:16.591 *********** 2025-07-06 20:12:27.776097 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:12:27.776107 | orchestrator | 2025-07-06 20:12:27.776116 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-07-06 20:12:27.776126 | orchestrator | Sunday 06 July 2025 20:11:14 +0000 (0:00:00.528) 0:01:17.119 *********** 2025-07-06 20:12:27.776135 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:12:27.776145 | orchestrator | 2025-07-06 20:12:27.776154 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-07-06 20:12:27.776164 | orchestrator | Sunday 06 July 2025 20:11:15 +0000 (0:00:00.776) 0:01:17.896 *********** 2025-07-06 20:12:27.776173 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:27.776183 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:27.776192 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:27.776202 | orchestrator | 2025-07-06 20:12:27.776211 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-07-06 20:12:27.776221 | orchestrator | Sunday 06 July 2025 20:11:16 +0000 (0:00:00.901) 0:01:18.798 
*********** 2025-07-06 20:12:27.776230 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:27.776240 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:27.776249 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:27.776264 | orchestrator | 2025-07-06 20:12:27.776274 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-07-06 20:12:27.776284 | orchestrator | Sunday 06 July 2025 20:11:16 +0000 (0:00:00.333) 0:01:19.131 *********** 2025-07-06 20:12:27.776293 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:27.776303 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:27.776312 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:27.776322 | orchestrator | 2025-07-06 20:12:27.776331 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-07-06 20:12:27.776341 | orchestrator | Sunday 06 July 2025 20:11:17 +0000 (0:00:00.371) 0:01:19.503 *********** 2025-07-06 20:12:27.776351 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:27.776411 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:27.776421 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:27.776431 | orchestrator | 2025-07-06 20:12:27.776440 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-07-06 20:12:27.776450 | orchestrator | Sunday 06 July 2025 20:11:17 +0000 (0:00:00.681) 0:01:20.184 *********** 2025-07-06 20:12:27.776459 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:27.776469 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:27.776478 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:27.776487 | orchestrator | 2025-07-06 20:12:27.776497 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-07-06 20:12:27.776506 | orchestrator | Sunday 06 July 2025 20:11:18 +0000 (0:00:00.557) 0:01:20.741 *********** 2025-07-06 20:12:27.776516 | orchestrator | skipping: 
[testbed-node-0] 2025-07-06 20:12:27.776525 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:27.776535 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:27.776544 | orchestrator | 2025-07-06 20:12:27.776554 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-07-06 20:12:27.776563 | orchestrator | Sunday 06 July 2025 20:11:18 +0000 (0:00:00.294) 0:01:21.036 *********** 2025-07-06 20:12:27.776573 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:27.776582 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:27.776592 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:27.776611 | orchestrator | 2025-07-06 20:12:27.776621 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-07-06 20:12:27.776630 | orchestrator | Sunday 06 July 2025 20:11:19 +0000 (0:00:00.308) 0:01:21.344 *********** 2025-07-06 20:12:27.776640 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:27.776649 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:27.776659 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:27.776668 | orchestrator | 2025-07-06 20:12:27.776678 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-07-06 20:12:27.776700 | orchestrator | Sunday 06 July 2025 20:11:19 +0000 (0:00:00.572) 0:01:21.917 *********** 2025-07-06 20:12:27.776710 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:27.776730 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:27.776740 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:27.776749 | orchestrator | 2025-07-06 20:12:27.776759 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-07-06 20:12:27.776768 | orchestrator | Sunday 06 July 2025 20:11:19 +0000 (0:00:00.302) 0:01:22.219 *********** 2025-07-06 20:12:27.776778 | orchestrator | skipping: 
[testbed-node-0] 2025-07-06 20:12:27.776787 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:27.776797 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:27.776806 | orchestrator | 2025-07-06 20:12:27.776815 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-07-06 20:12:27.776825 | orchestrator | Sunday 06 July 2025 20:11:20 +0000 (0:00:00.304) 0:01:22.524 *********** 2025-07-06 20:12:27.776835 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:27.776844 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:27.776853 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:27.776863 | orchestrator | 2025-07-06 20:12:27.776871 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-07-06 20:12:27.776883 | orchestrator | Sunday 06 July 2025 20:11:20 +0000 (0:00:00.344) 0:01:22.869 *********** 2025-07-06 20:12:27.776891 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:27.776899 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:27.776907 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:27.776914 | orchestrator | 2025-07-06 20:12:27.776922 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-07-06 20:12:27.776930 | orchestrator | Sunday 06 July 2025 20:11:21 +0000 (0:00:00.562) 0:01:23.431 *********** 2025-07-06 20:12:27.776938 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:27.776946 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:27.776953 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:27.776961 | orchestrator | 2025-07-06 20:12:27.776969 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-07-06 20:12:27.776976 | orchestrator | Sunday 06 July 2025 20:11:21 +0000 (0:00:00.308) 0:01:23.740 *********** 2025-07-06 20:12:27.776984 | orchestrator | skipping: 
[testbed-node-0] 2025-07-06 20:12:27.776992 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:27.777000 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:27.777007 | orchestrator | 2025-07-06 20:12:27.777015 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-07-06 20:12:27.777023 | orchestrator | Sunday 06 July 2025 20:11:21 +0000 (0:00:00.343) 0:01:24.084 *********** 2025-07-06 20:12:27.777031 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:27.777038 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:27.777046 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:27.777054 | orchestrator | 2025-07-06 20:12:27.777062 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-07-06 20:12:27.777069 | orchestrator | Sunday 06 July 2025 20:11:22 +0000 (0:00:00.350) 0:01:24.435 *********** 2025-07-06 20:12:27.777077 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:27.777085 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:27.777093 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:27.777106 | orchestrator | 2025-07-06 20:12:27.777114 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-07-06 20:12:27.777121 | orchestrator | Sunday 06 July 2025 20:11:22 +0000 (0:00:00.578) 0:01:25.013 *********** 2025-07-06 20:12:27.777129 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:27.777137 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:27.777150 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:27.777158 | orchestrator | 2025-07-06 20:12:27.777166 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-07-06 20:12:27.777174 | orchestrator | Sunday 06 July 2025 20:11:23 +0000 (0:00:00.277) 0:01:25.291 *********** 2025-07-06 20:12:27.777182 | orchestrator | included: 
/ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:12:27.777189 | orchestrator | 2025-07-06 20:12:27.777197 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-07-06 20:12:27.777205 | orchestrator | Sunday 06 July 2025 20:11:23 +0000 (0:00:00.551) 0:01:25.843 *********** 2025-07-06 20:12:27.777213 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:27.777221 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:27.777229 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:27.777236 | orchestrator | 2025-07-06 20:12:27.777244 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-07-06 20:12:27.777252 | orchestrator | Sunday 06 July 2025 20:11:24 +0000 (0:00:00.816) 0:01:26.659 *********** 2025-07-06 20:12:27.777260 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:12:27.777268 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:12:27.777275 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:12:27.777283 | orchestrator | 2025-07-06 20:12:27.777291 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-07-06 20:12:27.777299 | orchestrator | Sunday 06 July 2025 20:11:25 +0000 (0:00:00.655) 0:01:27.314 *********** 2025-07-06 20:12:27.777307 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:27.777315 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:27.777322 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:27.777330 | orchestrator | 2025-07-06 20:12:27.777338 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-07-06 20:12:27.777346 | orchestrator | Sunday 06 July 2025 20:11:25 +0000 (0:00:00.617) 0:01:27.932 *********** 2025-07-06 20:12:27.777370 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:27.777382 | orchestrator | skipping: [testbed-node-1] 
2025-07-06 20:12:27.777395 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:27.777409 | orchestrator | 2025-07-06 20:12:27.777418 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-07-06 20:12:27.777426 | orchestrator | Sunday 06 July 2025 20:11:26 +0000 (0:00:00.631) 0:01:28.563 *********** 2025-07-06 20:12:27.777433 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:27.777441 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:27.777449 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:27.777456 | orchestrator | 2025-07-06 20:12:27.777464 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-07-06 20:12:27.777472 | orchestrator | Sunday 06 July 2025 20:11:27 +0000 (0:00:01.051) 0:01:29.615 *********** 2025-07-06 20:12:27.777480 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:27.777488 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:27.777495 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:27.777503 | orchestrator | 2025-07-06 20:12:27.777511 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-07-06 20:12:27.777519 | orchestrator | Sunday 06 July 2025 20:11:27 +0000 (0:00:00.443) 0:01:30.059 *********** 2025-07-06 20:12:27.777526 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:27.777534 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:12:27.777542 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:12:27.777550 | orchestrator | 2025-07-06 20:12:27.777557 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-07-06 20:12:27.777571 | orchestrator | Sunday 06 July 2025 20:11:28 +0000 (0:00:00.394) 0:01:30.453 *********** 2025-07-06 20:12:27.777579 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:12:27.777587 | orchestrator | skipping: 
[testbed-node-1]
2025-07-06 20:12:27.777594 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:12:27.777602 | orchestrator | 
2025-07-06 20:12:27.777610 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-07-06 20:12:27.777622 | orchestrator | Sunday 06 July 2025 20:11:28 +0000 (0:00:00.308) 0:01:30.761 ***********
2025-07-06 20:12:27.777630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.777647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.777655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.777668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.777679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.777687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.777695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.777703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.777712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.777725 | orchestrator | 
2025-07-06 20:12:27.777733 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2025-07-06 20:12:27.777741 | orchestrator | Sunday 06 July 2025 20:11:30 +0000 (0:00:01.662) 0:01:32.424 ***********
2025-07-06 20:12:27.777754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.777763 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.777771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.777779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.777791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.777800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.777808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.777816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.777824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.777837 | orchestrator | 
2025-07-06 20:12:27.777845 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2025-07-06 20:12:27.777853 | orchestrator | Sunday 06 July 2025 20:11:34 +0000 (0:00:03.886) 0:01:36.311 ***********
2025-07-06 20:12:27.777861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.777873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.777881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.777890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.777898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.777911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.777919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.777927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.777935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.777949 | orchestrator | 
2025-07-06 20:12:27.777957 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-07-06 20:12:27.777965 | orchestrator | Sunday 06 July 2025 20:11:36 +0000 (0:00:02.008) 0:01:38.319 ***********
2025-07-06 20:12:27.777973 | orchestrator | 
2025-07-06 20:12:27.777981 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-07-06 20:12:27.777989 | orchestrator | Sunday 06 July 2025 20:11:36 +0000 (0:00:00.068) 0:01:38.387 ***********
2025-07-06 20:12:27.777997 | orchestrator | 
2025-07-06 20:12:27.778004 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-07-06 20:12:27.778037 | orchestrator | Sunday 06 July 2025 20:11:36 +0000 (0:00:00.069) 0:01:38.457 ***********
2025-07-06 20:12:27.778047 | orchestrator | 
2025-07-06 20:12:27.778055 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2025-07-06 20:12:27.778063 | orchestrator | Sunday 06 July 2025 20:11:36 +0000 (0:00:00.068) 0:01:38.526 ***********
2025-07-06 20:12:27.778071 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:12:27.778078 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:12:27.778086 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:12:27.778094 | orchestrator | 
2025-07-06 20:12:27.778102 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-07-06 20:12:27.778110 | orchestrator | Sunday 06 July 2025 20:11:38 +0000 (0:00:02.499) 0:01:41.025 ***********
2025-07-06 20:12:27.778117 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:12:27.778125 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:12:27.778133 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:12:27.778141 | orchestrator | 
2025-07-06 20:12:27.778149 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-07-06 20:12:27.778156 | orchestrator | Sunday 06 July 2025 20:11:45 +0000 (0:00:06.761) 0:01:47.787 ***********
2025-07-06 20:12:27.778164 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:12:27.778172 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:12:27.778186 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:12:27.778194 | orchestrator | 
2025-07-06 20:12:27.778202 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-07-06 20:12:27.778210 | orchestrator | Sunday 06 July 2025 20:11:48 +0000 (0:00:02.624) 0:01:50.411 ***********
2025-07-06 20:12:27.778218 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:12:27.778225 | orchestrator | 
2025-07-06 20:12:27.778233 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-07-06 20:12:27.778241 | orchestrator | Sunday 06 July 2025 20:11:48 +0000 (0:00:00.118) 0:01:50.530 ***********
2025-07-06 20:12:27.778249 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:12:27.778256 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:12:27.778264 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:12:27.778272 | orchestrator | 
2025-07-06 20:12:27.778279 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-07-06 20:12:27.778287 | orchestrator | Sunday 06 July 2025 20:11:49 +0000 (0:00:00.888) 0:01:51.418 ***********
2025-07-06 20:12:27.778295 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:12:27.778303 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:12:27.778310 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:12:27.778318 | orchestrator | 
2025-07-06 20:12:27.778326 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-07-06 20:12:27.778334 | orchestrator | Sunday 06 July 2025 20:11:50 +0000 (0:00:00.849) 0:01:52.268 ***********
2025-07-06 20:12:27.778342 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:12:27.778349 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:12:27.778380 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:12:27.778394 | orchestrator | 
2025-07-06 20:12:27.778402 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-07-06 20:12:27.778410 | orchestrator | Sunday 06 July 2025 20:11:50 +0000 (0:00:00.912) 0:01:53.180 ***********
2025-07-06 20:12:27.778425 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:12:27.778433 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:12:27.778441 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:12:27.778448 | orchestrator | 
2025-07-06 20:12:27.778456 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-07-06 20:12:27.778464 | orchestrator | Sunday 06 July 2025 20:11:51 +0000 (0:00:00.599) 0:01:53.780 ***********
2025-07-06 20:12:27.778472 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:12:27.778480 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:12:27.778492 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:12:27.778500 | orchestrator | 
2025-07-06 20:12:27.778508 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-07-06 20:12:27.778516 | orchestrator | Sunday 06 July 2025 20:11:52 +0000 (0:00:00.976) 0:01:54.757 ***********
2025-07-06 20:12:27.778524 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:12:27.778532 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:12:27.778540 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:12:27.778548 | orchestrator | 
2025-07-06 20:12:27.778556 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2025-07-06 20:12:27.778564 | orchestrator | Sunday 06 July 2025 20:11:53 +0000 (0:00:01.254) 0:01:56.011 ***********
2025-07-06 20:12:27.778572 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:12:27.778579 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:12:27.778587 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:12:27.778595 | orchestrator | 
2025-07-06 20:12:27.778603 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-07-06 20:12:27.778610 | orchestrator | Sunday 06 July 2025 20:11:54 +0000 (0:00:00.315) 0:01:56.326 ***********
2025-07-06 20:12:27.778619 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.778627 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.778635 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.778643 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.778656 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.778665 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.778678 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.778687 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.778700 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.778708 | orchestrator | 
2025-07-06 20:12:27.778716 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2025-07-06 20:12:27.778724 | orchestrator | Sunday 06 July 2025 20:11:55 +0000 (0:00:01.584) 0:01:57.911 ***********
2025-07-06 20:12:27.778732 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.778741 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.778749 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.778757 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.778765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.778774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.778790 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.778799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.778807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.778815 | orchestrator | 
2025-07-06 20:12:27.778823 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2025-07-06 20:12:27.778832 | orchestrator | Sunday 06 July 2025 20:11:59 +0000 (0:00:03.994) 0:02:01.905 ***********
2025-07-06 20:12:27.778844 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.778853 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.778861 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.778869 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.778878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.778886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.778899 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.778907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.778968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:12:27.778984 | orchestrator | 
2025-07-06 20:12:27.778992 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-07-06 20:12:27.779000 | orchestrator | Sunday 06 July 2025 20:12:02 +0000 (0:00:03.108) 0:02:05.014 ***********
2025-07-06 20:12:27.779008 | orchestrator | 
2025-07-06 20:12:27.779016 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-07-06 20:12:27.779024 | orchestrator | Sunday 06 July 2025 20:12:02 +0000 (0:00:00.065) 0:02:05.079 ***********
2025-07-06 20:12:27.779032 | orchestrator | 
2025-07-06 20:12:27.779039 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-07-06 20:12:27.779047 | orchestrator | Sunday 06 July 2025 20:12:02 +0000 (0:00:00.082) 0:02:05.161 ***********
2025-07-06 20:12:27.779055 | orchestrator | 
2025-07-06 20:12:27.779063 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2025-07-06 20:12:27.779070 | orchestrator | Sunday 06 July 2025 20:12:03 +0000 (0:00:00.066) 0:02:05.228 ***********
2025-07-06 20:12:27.779078 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:12:27.779086 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:12:27.779094 | orchestrator | 
2025-07-06 20:12:27.779106 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-07-06 20:12:27.779115 | orchestrator | Sunday 06 July 2025 20:12:09 +0000 (0:00:06.347) 0:02:11.575 ***********
2025-07-06 20:12:27.779122 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:12:27.779130 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:12:27.779138 | orchestrator | 
2025-07-06 20:12:27.779146 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-07-06 20:12:27.779153 | orchestrator | Sunday 06 July 2025 20:12:15 +0000 (0:00:06.150) 0:02:17.726 ***********
2025-07-06 20:12:27.779161 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:12:27.779169 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:12:27.779177 | orchestrator | 
2025-07-06 20:12:27.779185 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-07-06 20:12:27.779193 | orchestrator | Sunday 06 July 2025 20:12:21 +0000 (0:00:06.147) 0:02:23.874 ***********
2025-07-06 20:12:27.779201 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:12:27.779209 | orchestrator | 
2025-07-06 20:12:27.779216 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-07-06 20:12:27.779224 | orchestrator | Sunday 06 July 2025 20:12:21 +0000 (0:00:00.136) 0:02:24.010 ***********
2025-07-06 20:12:27.779232 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:12:27.779240 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:12:27.779247 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:12:27.779255 | orchestrator | 
2025-07-06 20:12:27.779263 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-07-06 20:12:27.779276 | orchestrator | Sunday 06 July 2025 20:12:22 +0000 (0:00:00.969) 0:02:24.980 ***********
2025-07-06 20:12:27.779284 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:12:27.779292 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:12:27.779300 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:12:27.779307 | orchestrator | 
2025-07-06 20:12:27.779315 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-07-06 20:12:27.779323 | orchestrator | Sunday 06 July 2025 20:12:23 +0000 (0:00:00.684) 0:02:25.665 ***********
2025-07-06 20:12:27.779331 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:12:27.779338 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:12:27.779346 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:12:27.779397 | orchestrator | 
2025-07-06 20:12:27.779408 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-07-06 20:12:27.779416 | orchestrator | Sunday 06 July 2025 20:12:24 +0000 (0:00:00.745) 0:02:26.410 ***********
2025-07-06 20:12:27.779424 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:12:27.779432 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:12:27.779440 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:12:27.779448 | orchestrator | 
2025-07-06 20:12:27.779456 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-07-06 20:12:27.779464 | orchestrator | Sunday 06 July 2025 20:12:24 +0000 (0:00:00.588) 0:02:26.999 ***********
2025-07-06 20:12:27.779472 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:12:27.779480 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:12:27.779487 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:12:27.779495 | orchestrator | 
2025-07-06 20:12:27.779503 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-07-06 20:12:27.779511 | orchestrator | Sunday 06 July 2025 20:12:25 +0000 (0:00:01.194) 0:02:28.194 ***********
2025-07-06 20:12:27.779519 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:12:27.779527 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:12:27.779535 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:12:27.779542 | orchestrator | 
2025-07-06 20:12:27.779550 | orchestrator | PLAY RECAP *********************************************************************
2025-07-06 20:12:27.779558 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-07-06 20:12:27.779572 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-07-06 20:12:27.779580 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-07-06 20:12:27.779588 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 20:12:27.779596 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 20:12:27.779604 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 20:12:27.779612 | orchestrator | 
2025-07-06 20:12:27.779620 | orchestrator | 
2025-07-06 20:12:27.779628 | orchestrator | TASKS RECAP ********************************************************************
2025-07-06 20:12:27.779636 | orchestrator | Sunday 06 July 2025 20:12:26 +0000 (0:00:00.826) 0:02:29.021 ***********
2025-07-06 20:12:27.779644 | orchestrator | ===============================================================================
2025-07-06 20:12:27.779651 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 41.08s
2025-07-06 20:12:27.779659 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 19.48s
2025-07-06 20:12:27.779667 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 12.91s
2025-07-06 20:12:27.779675 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 8.85s
2025-07-06 20:12:27.779689 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 8.77s
2025-07-06 20:12:27.779697 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.99s
2025-07-06 20:12:27.779705 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.89s
2025-07-06 20:12:27.779717 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.11s
2025-07-06 20:12:27.779725 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.39s
2025-07-06 20:12:27.779733 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.01s
2025-07-06 20:12:27.779740 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.82s
2025-07-06 20:12:27.779748 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.73s
2025-07-06 20:12:27.779756 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.68s
2025-07-06 20:12:27.779764 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.66s
2025-07-06 20:12:27.779771 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.65s
2025-07-06 20:12:27.779779 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.64s
2025-07-06 20:12:27.779787 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.58s
2025-07-06 20:12:27.779795 | orchestrator | ovn-controller
: include_tasks ------------------------------------------ 1.30s 2025-07-06 20:12:27.779802 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.25s 2025-07-06 20:12:27.779810 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.19s 2025-07-06 20:12:27.779818 | orchestrator | 2025-07-06 20:12:27 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:12:27.779826 | orchestrator | 2025-07-06 20:12:27 | INFO  | Task 0934b857-c2e1-4250-9a1b-b0a86c149407 is in state STARTED 2025-07-06 20:12:27.779834 | orchestrator | 2025-07-06 20:12:27 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:12:30.813766 | orchestrator | 2025-07-06 20:12:30 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:12:30.815965 | orchestrator | 2025-07-06 20:12:30 | INFO  | Task 0934b857-c2e1-4250-9a1b-b0a86c149407 is in state STARTED 2025-07-06 20:12:30.816001 | orchestrator | 2025-07-06 20:12:30 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:12:33.867625 | orchestrator | 2025-07-06 20:12:33 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:12:33.870014 | orchestrator | 2025-07-06 20:12:33 | INFO  | Task 0934b857-c2e1-4250-9a1b-b0a86c149407 is in state STARTED 2025-07-06 20:12:33.870124 | orchestrator | 2025-07-06 20:12:33 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:12:36.924663 | orchestrator | 2025-07-06 20:12:36 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:12:36.926003 | orchestrator | 2025-07-06 20:12:36 | INFO  | Task 0934b857-c2e1-4250-9a1b-b0a86c149407 is in state STARTED 2025-07-06 20:12:36.926181 | orchestrator | 2025-07-06 20:12:36 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:12:39.976650 | orchestrator | 2025-07-06 20:12:39 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 
20:15:03 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED
2025-07-06 20:15:03.263329 | orchestrator | 2025-07-06 20:15:03 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED
2025-07-06 20:15:03.263348 | orchestrator | 2025-07-06 20:15:03 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:15:03.271000 | orchestrator |
2025-07-06 20:15:03.271092 | orchestrator |
2025-07-06 20:15:03.271115 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-06 20:15:03.271135 | orchestrator |
2025-07-06 20:15:03.271153 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-06 20:15:03.271171 | orchestrator | Sunday 06 July 2025 20:08:52 +0000 (0:00:00.352) 0:00:00.352 ***********
2025-07-06 20:15:03.271189 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:15:03.271207 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:15:03.271224 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:15:03.271241 | orchestrator |
2025-07-06 20:15:03.271437 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-06 20:15:03.272125 | orchestrator | Sunday 06 July 2025 20:08:52 +0000 (0:00:00.493) 0:00:00.845 ***********
2025-07-06 20:15:03.272178 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2025-07-06 20:15:03.272189 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2025-07-06 20:15:03.272199 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2025-07-06 20:15:03.272208 | orchestrator |
2025-07-06 20:15:03.272218 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2025-07-06 20:15:03.272228 | orchestrator |
2025-07-06 20:15:03.272238 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-07-06 20:15:03.272276 | orchestrator | Sunday 06 July 2025 20:08:53 +0000 (0:00:00.510) 0:00:01.356 ***********
2025-07-06 20:15:03.272287 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-06 20:15:03.272299 | orchestrator |
2025-07-06 20:15:03.272309 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2025-07-06 20:15:03.272319 | orchestrator | Sunday 06 July 2025 20:08:54 +0000 (0:00:00.573) 0:00:01.929 ***********
2025-07-06 20:15:03.272328 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:15:03.272338 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:15:03.272347 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:15:03.272357 | orchestrator |
2025-07-06 20:15:03.272367 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-07-06 20:15:03.272376 | orchestrator | Sunday 06 July 2025 20:08:54 +0000 (0:00:00.857) 0:00:02.787 ***********
2025-07-06 20:15:03.272386 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-06 20:15:03.272396 | orchestrator |
2025-07-06 20:15:03.272426 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2025-07-06 20:15:03.272436 | orchestrator | Sunday 06 July 2025 20:08:55 +0000 (0:00:00.722) 0:00:03.509 ***********
2025-07-06 20:15:03.272446 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:15:03.272456 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:15:03.272465 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:15:03.272479 | orchestrator |
2025-07-06 20:15:03.272495 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2025-07-06 20:15:03.272511 | orchestrator | Sunday 06 July 2025 20:08:56 +0000 (0:00:00.589) 0:00:04.099 ***********
2025-07-06 20:15:03.272525 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-07-06 20:15:03.272540 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-07-06 20:15:03.272555 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-07-06 20:15:03.272571 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-07-06 20:15:03.272587 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-07-06 20:15:03.272603 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-07-06 20:15:03.272620 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-07-06 20:15:03.272637 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-07-06 20:15:03.272653 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-07-06 20:15:03.272663 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-07-06 20:15:03.272673 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-07-06 20:15:03.272682 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-07-06 20:15:03.272692 | orchestrator |
2025-07-06 20:15:03.272701 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-07-06 20:15:03.272711 | orchestrator | Sunday 06 July 2025 20:08:58 +0000 (0:00:02.662) 0:00:06.761 ***********
2025-07-06 20:15:03.272731 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-07-06 20:15:03.272744 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-07-06 20:15:03.272755 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-07-06 20:15:03.272766 | orchestrator |
2025-07-06 20:15:03.272777 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-07-06 20:15:03.272788 | orchestrator | Sunday 06 July 2025 20:08:59 +0000 (0:00:00.958) 0:00:07.719 ***********
2025-07-06 20:15:03.272799 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-07-06 20:15:03.272811 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-07-06 20:15:03.272823 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-07-06 20:15:03.272834 | orchestrator |
2025-07-06 20:15:03.272845 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-07-06 20:15:03.272856 | orchestrator | Sunday 06 July 2025 20:09:01 +0000 (0:00:01.545) 0:00:09.265 ***********
2025-07-06 20:15:03.272867 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2025-07-06 20:15:03.272878 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:15:03.272936 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2025-07-06 20:15:03.272949 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:15:03.272961 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2025-07-06 20:15:03.272972 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:15:03.272985 | orchestrator |
2025-07-06 20:15:03.272996 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2025-07-06 20:15:03.273006 | orchestrator | Sunday 06 July 2025 20:09:02 +0000 (0:00:01.200) 0:00:10.465 ***********
2025-07-06 20:15:03.273027 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-07-06 20:15:03.273043 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-07-06 20:15:03.273054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-07-06 20:15:03.273064 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-07-06 20:15:03.273082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-07-06 20:15:03.273115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-07-06 20:15:03.273132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-07-06 20:15:03.273146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-07-06 20:15:03.273163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-07-06 20:15:03.273179 | orchestrator |
2025-07-06 20:15:03.273195 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2025-07-06 20:15:03.273211 | orchestrator | Sunday 06 July 2025 20:09:04 +0000 (0:00:02.074) 0:00:12.540 ***********
2025-07-06 20:15:03.273234 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:15:03.273284 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:15:03.273301 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:15:03.273316 | orchestrator |
2025-07-06 20:15:03.273334 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2025-07-06 20:15:03.273344 | orchestrator | Sunday 06 July 2025 20:09:06 +0000 (0:00:01.390) 0:00:13.930 ***********
2025-07-06 20:15:03.273354 | orchestrator | changed: [testbed-node-0] => (item=users)
2025-07-06 20:15:03.273364 | orchestrator | changed: [testbed-node-1] => (item=users)
2025-07-06 20:15:03.273384 | orchestrator | changed: [testbed-node-2] => (item=users)
2025-07-06 20:15:03.273393 | orchestrator | changed: [testbed-node-0] => (item=rules)
2025-07-06 20:15:03.273403 | orchestrator | changed: [testbed-node-1] => (item=rules)
2025-07-06 20:15:03.273412 | orchestrator | changed: [testbed-node-2] => (item=rules)
2025-07-06 20:15:03.273422 | orchestrator |
2025-07-06 20:15:03.273432 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2025-07-06 20:15:03.273441 | orchestrator | Sunday 06 July 2025 20:09:08 +0000 (0:00:02.850) 0:00:16.781 ***********
2025-07-06 20:15:03.273451 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:15:03.273460 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:15:03.273470 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:15:03.273479 | orchestrator |
2025-07-06 20:15:03.273489 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2025-07-06 20:15:03.273498 | orchestrator | Sunday 06 July 2025 20:09:10 +0000 (0:00:01.779) 0:00:18.561 ***********
2025-07-06 20:15:03.273508 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:15:03.273517 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:15:03.273527 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:15:03.273536 | orchestrator |
2025-07-06 20:15:03.273546 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2025-07-06 20:15:03.273555 | orchestrator | Sunday 06 July 2025 20:09:12 +0000 (0:00:02.014) 0:00:20.575 ***********
2025-07-06 20:15:03.273566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group':
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-06 20:15:03.276703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:15:03.276755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:15:03.276767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': 
['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8a1305695cbf548c4894d299b4c725b919c89faf', '__omit_place_holder__8a1305695cbf548c4894d299b4c725b919c89faf'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-06 20:15:03.276776 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.276797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-06 20:15:03.276805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:15:03.276814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:15:03.276822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-06 20:15:03.276847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8a1305695cbf548c4894d299b4c725b919c89faf', '__omit_place_holder__8a1305695cbf548c4894d299b4c725b919c89faf'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-06 20:15:03.276860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:15:03.276868 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.276877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:15:03.276893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8a1305695cbf548c4894d299b4c725b919c89faf', '__omit_place_holder__8a1305695cbf548c4894d299b4c725b919c89faf'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-06 20:15:03.276901 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.276909 | orchestrator | 2025-07-06 20:15:03.276918 | orchestrator | TASK 
[loadbalancer : Copying checks for services which are enabled] ************ 2025-07-06 20:15:03.276927 | orchestrator | Sunday 06 July 2025 20:09:13 +0000 (0:00:00.529) 0:00:21.105 *********** 2025-07-06 20:15:03.276935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-07-06 20:15:03.276944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-07-06 20:15:03.276961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-07-06 20:15:03.276973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-06 20:15:03.276988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:15:03.276996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8a1305695cbf548c4894d299b4c725b919c89faf', 
'__omit_place_holder__8a1305695cbf548c4894d299b4c725b919c89faf'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-06 20:15:03.277005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-06 20:15:03.277013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:15:03.277021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8a1305695cbf548c4894d299b4c725b919c89faf', 
'__omit_place_holder__8a1305695cbf548c4894d299b4c725b919c89faf'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-06 20:15:03.277039 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-06 20:15:03.277048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:15:03.277338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__8a1305695cbf548c4894d299b4c725b919c89faf', 
'__omit_place_holder__8a1305695cbf548c4894d299b4c725b919c89faf'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-06 20:15:03.277349 | orchestrator | 2025-07-06 20:15:03.277359 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-07-06 20:15:03.277368 | orchestrator | Sunday 06 July 2025 20:09:16 +0000 (0:00:02.855) 0:00:23.960 *********** 2025-07-06 20:15:03.277379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-07-06 20:15:03.277389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-07-06 20:15:03.277398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-07-06 20:15:03.277415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-06 20:15:03.277430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-06 20:15:03.277446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-06 20:15:03.277456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-06 20:15:03.277464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-06 20:15:03.277472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-06 20:15:03.277481 | orchestrator | 2025-07-06 20:15:03.277489 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-07-06 20:15:03.277497 | orchestrator | Sunday 06 July 2025 20:09:19 +0000 (0:00:03.811) 0:00:27.771 *********** 2025-07-06 20:15:03.277505 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-07-06 20:15:03.277514 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-07-06 20:15:03.277522 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-07-06 20:15:03.277530 | orchestrator | 2025-07-06 20:15:03.277538 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-07-06 20:15:03.277546 | orchestrator | Sunday 06 July 2025 20:09:21 +0000 (0:00:02.058) 0:00:29.830 *********** 2025-07-06 20:15:03.277554 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-07-06 20:15:03.277563 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-07-06 20:15:03.277576 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-07-06 20:15:03.277596 | orchestrator | 2025-07-06 20:15:03.277620 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-07-06 20:15:03.277634 | orchestrator | Sunday 06 July 2025 20:09:27 +0000 (0:00:05.475) 0:00:35.305 *********** 2025-07-06 20:15:03.277647 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.277659 
| orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.277673 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.277686 | orchestrator | 2025-07-06 20:15:03.277699 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-07-06 20:15:03.277713 | orchestrator | Sunday 06 July 2025 20:09:27 +0000 (0:00:00.579) 0:00:35.885 *********** 2025-07-06 20:15:03.277732 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-07-06 20:15:03.277747 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-07-06 20:15:03.277760 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-07-06 20:15:03.277773 | orchestrator | 2025-07-06 20:15:03.277787 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-07-06 20:15:03.277801 | orchestrator | Sunday 06 July 2025 20:09:30 +0000 (0:00:02.743) 0:00:38.628 *********** 2025-07-06 20:15:03.277815 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-07-06 20:15:03.277824 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-07-06 20:15:03.277832 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-07-06 20:15:03.277840 | orchestrator | 2025-07-06 20:15:03.277848 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-07-06 20:15:03.277856 | orchestrator | Sunday 06 July 2025 20:09:32 +0000 (0:00:01.806) 0:00:40.435 *********** 2025-07-06 20:15:03.277864 | orchestrator | changed: [testbed-node-0] => 
(item=haproxy.pem) 2025-07-06 20:15:03.277872 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-07-06 20:15:03.277880 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-07-06 20:15:03.277888 | orchestrator | 2025-07-06 20:15:03.277896 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-07-06 20:15:03.277904 | orchestrator | Sunday 06 July 2025 20:09:33 +0000 (0:00:01.481) 0:00:41.917 *********** 2025-07-06 20:15:03.277911 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-07-06 20:15:03.277919 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-07-06 20:15:03.277927 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-07-06 20:15:03.277935 | orchestrator | 2025-07-06 20:15:03.277943 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-07-06 20:15:03.277950 | orchestrator | Sunday 06 July 2025 20:09:35 +0000 (0:00:01.808) 0:00:43.726 *********** 2025-07-06 20:15:03.277958 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:15:03.277966 | orchestrator | 2025-07-06 20:15:03.277974 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-07-06 20:15:03.277982 | orchestrator | Sunday 06 July 2025 20:09:36 +0000 (0:00:00.910) 0:00:44.637 *********** 2025-07-06 20:15:03.277990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-07-06 20:15:03.278006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-07-06 20:15:03.282312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-07-06 20:15:03.282391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-06 20:15:03.282407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-06 20:15:03.282419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-06 20:15:03.282431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-06 20:15:03.282531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-06 20:15:03.282584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-06 20:15:03.282606 | orchestrator | 2025-07-06 20:15:03.282629 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-07-06 20:15:03.282649 | orchestrator | Sunday 06 July 2025 20:09:39 +0000 (0:00:03.160) 0:00:47.797 *********** 2025-07-06 20:15:03.282683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-06 20:15:03.282703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:15:03.282715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:15:03.282727 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.282739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-06 20:15:03.282751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:15:03.282770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:15:03.282782 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.282793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-06 20:15:03.282817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:15:03.282830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:15:03.282841 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.282852 | orchestrator | 2025-07-06 20:15:03.282864 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-07-06 20:15:03.282875 | orchestrator | Sunday 06 July 2025 20:09:40 +0000 (0:00:00.674) 0:00:48.472 *********** 2025-07-06 20:15:03.282886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-06 20:15:03.282898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:15:03.282916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:15:03.282928 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.282939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-06 20:15:03.282959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:15:03.282975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:15:03.282987 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.282998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-06 20:15:03.283010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:15:03.283028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:15:03.283039 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.283050 | orchestrator | 2025-07-06 20:15:03.283061 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-07-06 20:15:03.283072 | orchestrator | Sunday 06 July 2025 20:09:42 +0000 (0:00:01.509) 0:00:49.981 *********** 2025-07-06 20:15:03.283083 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-06 20:15:03.283104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:15:03.283121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:15:03.283132 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.283143 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-06 20:15:03.283155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:15:03.283174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:15:03.283185 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.283196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-06 20:15:03.283208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:15:03.283226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:15:03.283237 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.283306 | orchestrator | 2025-07-06 20:15:03.283320 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS 
certificate] *** 2025-07-06 20:15:03.283331 | orchestrator | Sunday 06 July 2025 20:09:42 +0000 (0:00:00.719) 0:00:50.701 *********** 2025-07-06 20:15:03.283347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-06 20:15:03.283359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:15:03.283378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}})  2025-07-06 20:15:03.283389 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.283400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-06 20:15:03.283412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:15:03.283423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:15:03.283434 | 
orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.283454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-06 20:15:03.283470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:15:03.283490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:15:03.283502 | orchestrator | skipping: [testbed-node-2] 
2025-07-06 20:15:03.283512 | orchestrator | 2025-07-06 20:15:03.283523 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-07-06 20:15:03.283536 | orchestrator | Sunday 06 July 2025 20:09:43 +0000 (0:00:00.599) 0:00:51.301 *********** 2025-07-06 20:15:03.283557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
2025-07-06 20:15:03 | INFO  | Task 0934b857-c2e1-4250-9a1b-b0a86c149407 is in state SUCCESS
2025-07-06 20:15:03.283587 | orchestrator | '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-06 20:15:03.283615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:15:03.283635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:15:03.283653 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.283682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-06 20:15:03.283719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:15:03.283755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:15:03.283795 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.283820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-06 20:15:03.283832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:15:03.283843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:15:03.283854 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.283865 | orchestrator | 2025-07-06 20:15:03.283876 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-07-06 20:15:03.283893 | orchestrator | Sunday 06 July 2025 20:09:44 +0000 (0:00:01.307) 0:00:52.609 *********** 2025-07-06 20:15:03.283914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-06 20:15:03.283953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:15:03.283975 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:15:03.283987 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.283998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-06 20:15:03.284009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:15:03.284020 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:15:03.284031 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.284042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-06 20:15:03.284061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:15:03.284085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:15:03.284096 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.284107 | orchestrator | 2025-07-06 20:15:03.284118 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-07-06 20:15:03.284129 | orchestrator | Sunday 06 July 2025 20:09:45 +0000 (0:00:00.876) 0:00:53.485 *********** 2025-07-06 20:15:03.284141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-06 20:15:03.284152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:15:03.284164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:15:03.284175 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.284186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-06 20:15:03.284198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:15:03.284233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:15:03.284269 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.284291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-06 20:15:03.284303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:15:03.284314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:15:03.284325 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.284336 | orchestrator | 2025-07-06 20:15:03.284347 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-07-06 20:15:03.284358 | orchestrator | Sunday 06 July 2025 20:09:47 +0000 (0:00:01.476) 0:00:54.962 *********** 2025-07-06 20:15:03.284369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-06 20:15:03.284381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:15:03.284410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:15:03.284427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-06 20:15:03.284439 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.284450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:15:03.284461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:15:03.284472 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.284483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-06 20:15:03.284494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-06 20:15:03.284506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-06 20:15:03.284524 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.284549 | orchestrator | 2025-07-06 20:15:03.284561 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-07-06 20:15:03.284582 | orchestrator | Sunday 06 July 2025 20:09:48 +0000 (0:00:01.504) 0:00:56.467 *********** 2025-07-06 20:15:03.284593 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-07-06 20:15:03.284611 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-07-06 20:15:03.284623 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-07-06 20:15:03.284634 | orchestrator | 2025-07-06 20:15:03.284645 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-07-06 20:15:03.284656 | orchestrator | Sunday 06 July 2025 20:09:50 +0000 (0:00:01.505) 0:00:57.972 *********** 2025-07-06 20:15:03.284667 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-07-06 20:15:03.284684 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-07-06 20:15:03.284696 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-07-06 20:15:03.284707 | orchestrator | 2025-07-06 20:15:03.284718 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-07-06 20:15:03.284729 | orchestrator | Sunday 06 July 2025 20:09:51 +0000 (0:00:01.506) 0:00:59.478 *********** 2025-07-06 20:15:03.284740 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-07-06 20:15:03.284750 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-07-06 20:15:03.284761 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-07-06 20:15:03.284772 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-07-06 20:15:03.284783 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.284794 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-07-06 20:15:03.284805 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.284816 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-07-06 20:15:03.284827 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.284838 | orchestrator | 2025-07-06 20:15:03.284849 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-07-06 20:15:03.284860 | orchestrator | Sunday 06 July 2025 20:09:52 +0000 (0:00:01.035) 0:01:00.513 *********** 
2025-07-06 20:15:03.284871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-07-06 20:15:03.284883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-07-06 20:15:03.284901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 
2025-07-06 20:15:03.284919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-06 20:15:03.284936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-06 20:15:03.284947 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': 
'30'}}}) 2025-07-06 20:15:03.284959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-06 20:15:03.284971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-06 20:15:03.284991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-06 20:15:03.285002 | orchestrator | 2025-07-06 20:15:03.285013 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-07-06 20:15:03.285025 | orchestrator | Sunday 06 July 2025 20:09:55 +0000 (0:00:02.602) 0:01:03.116 *********** 2025-07-06 20:15:03.285036 | orchestrator | included: aodh for testbed-node-0, 
testbed-node-1, testbed-node-2 2025-07-06 20:15:03.285047 | orchestrator | 2025-07-06 20:15:03.285058 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-07-06 20:15:03.285069 | orchestrator | Sunday 06 July 2025 20:09:55 +0000 (0:00:00.788) 0:01:03.905 *********** 2025-07-06 20:15:03.285087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-07-06 20:15:03.285109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-07-06 20:15:03.285122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.285133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.285145 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-07-06 20:15:03.285163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-07-06 20:15:03.285175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.285194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.285211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': 
['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-07-06 20:15:03.285223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-07-06 20:15:03.285496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.285536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 
'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.285555 | orchestrator | 2025-07-06 20:15:03.285568 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-07-06 20:15:03.285579 | orchestrator | Sunday 06 July 2025 20:10:00 +0000 (0:00:04.543) 0:01:08.448 *********** 2025-07-06 20:15:03.285590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-07-06 20:15:03.285602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-07-06 20:15:03.285620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.285632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.285649 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.285671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-07-06 20:15:03.285691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-07-06 20:15:03.285718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.285741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.285760 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.285787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-07-06 20:15:03.285806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-07-06 20:15:03.285836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 
'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.285867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.285887 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.285906 | orchestrator | 2025-07-06 20:15:03.285918 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-07-06 20:15:03.285929 | orchestrator | Sunday 06 July 2025 20:10:01 +0000 (0:00:01.084) 0:01:09.532 *********** 2025-07-06 20:15:03.285940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-07-06 20:15:03.285952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-07-06 20:15:03.285964 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.285975 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-07-06 20:15:03.285986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-07-06 20:15:03.285997 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.286008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-07-06 20:15:03.286056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-07-06 20:15:03.286069 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.286082 | orchestrator | 2025-07-06 20:15:03.286095 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-07-06 20:15:03.286108 | orchestrator | Sunday 06 July 2025 20:10:02 +0000 (0:00:01.105) 0:01:10.637 *********** 2025-07-06 20:15:03.286121 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:15:03.286133 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:15:03.286146 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:15:03.286158 | orchestrator | 2025-07-06 20:15:03.286170 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-07-06 20:15:03.286276 | orchestrator | Sunday 06 July 2025 20:10:04 +0000 (0:00:01.686) 0:01:12.324 *********** 2025-07-06 20:15:03.286301 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:15:03.286324 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:15:03.286336 | orchestrator | changed: 
[testbed-node-2] 2025-07-06 20:15:03.286350 | orchestrator | 2025-07-06 20:15:03.286363 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-07-06 20:15:03.286377 | orchestrator | Sunday 06 July 2025 20:10:06 +0000 (0:00:02.121) 0:01:14.446 *********** 2025-07-06 20:15:03.286389 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:15:03.286403 | orchestrator | 2025-07-06 20:15:03.286415 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-07-06 20:15:03.286427 | orchestrator | Sunday 06 July 2025 20:10:07 +0000 (0:00:00.731) 0:01:15.178 *********** 2025-07-06 20:15:03.286440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-06 20:15:03.286474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.286488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.286500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-06 20:15:03.286517 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.286536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-06 20:15:03.286547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.286566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.286578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.286589 | orchestrator | 2025-07-06 20:15:03.286600 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-07-06 20:15:03.286611 | orchestrator | Sunday 06 July 2025 20:10:11 +0000 (0:00:03.799) 0:01:18.977 *********** 2025-07-06 20:15:03.286623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 
'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-06 20:15:03.286646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.286672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.286695 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.286713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-06 20:15:03.286725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.286737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 
'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.286755 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.286771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-06 20:15:03.286783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.286794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.286805 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.286816 | orchestrator | 2025-07-06 20:15:03.286827 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-07-06 20:15:03.286838 | orchestrator | Sunday 06 July 2025 20:10:11 +0000 (0:00:00.922) 0:01:19.900 *********** 2025-07-06 20:15:03.286854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-07-06 20:15:03.286867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-07-06 20:15:03.286880 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.286891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-07-06 20:15:03.286902 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-07-06 20:15:03.286913 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.286924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-07-06 20:15:03.286935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-07-06 20:15:03.286953 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.286964 | orchestrator | 2025-07-06 20:15:03.286975 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-07-06 20:15:03.286986 | orchestrator | Sunday 06 July 2025 20:10:13 +0000 (0:00:01.851) 0:01:21.751 *********** 2025-07-06 20:15:03.286997 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:15:03.287008 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:15:03.287018 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:15:03.287029 | orchestrator | 2025-07-06 20:15:03.287040 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-07-06 20:15:03.287051 | orchestrator | Sunday 06 July 2025 20:10:15 +0000 (0:00:01.289) 0:01:23.041 *********** 2025-07-06 20:15:03.287062 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:15:03.287072 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:15:03.287083 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:15:03.287094 | orchestrator | 2025-07-06 20:15:03.287105 | orchestrator | TASK 
[include_role : blazar] *************************************************** 2025-07-06 20:15:03.287115 | orchestrator | Sunday 06 July 2025 20:10:17 +0000 (0:00:02.004) 0:01:25.045 *********** 2025-07-06 20:15:03.287126 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.287137 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.287148 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.287158 | orchestrator | 2025-07-06 20:15:03.287173 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-07-06 20:15:03.287184 | orchestrator | Sunday 06 July 2025 20:10:17 +0000 (0:00:00.461) 0:01:25.507 *********** 2025-07-06 20:15:03.287195 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:15:03.287206 | orchestrator | 2025-07-06 20:15:03.287217 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-07-06 20:15:03.287227 | orchestrator | Sunday 06 July 2025 20:10:18 +0000 (0:00:00.632) 0:01:26.139 *********** 2025-07-06 20:15:03.287239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-07-06 20:15:03.287318 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-07-06 20:15:03.287332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-07-06 20:15:03.287352 | orchestrator | 2025-07-06 20:15:03.287363 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-07-06 20:15:03.287374 | orchestrator | Sunday 06 July 2025 20:10:20 +0000 (0:00:02.485) 0:01:28.625 
*********** 2025-07-06 20:15:03.287385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-07-06 20:15:03.287396 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.287414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-07-06 20:15:03.287425 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.287436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 
'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-07-06 20:15:03.287448 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.287458 | orchestrator | 2025-07-06 20:15:03.287469 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-07-06 20:15:03.287480 | orchestrator | Sunday 06 July 2025 20:10:22 +0000 (0:00:01.832) 0:01:30.457 *********** 2025-07-06 20:15:03.287498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-07-06 20:15:03.287516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 
check inter 2000 rise 2 fall 5']}})  2025-07-06 20:15:03.287528 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.287540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-07-06 20:15:03.287551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-07-06 20:15:03.287562 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.287573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-07-06 20:15:03.287589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  
2025-07-06 20:15:03.287599 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.287609 | orchestrator | 2025-07-06 20:15:03.287618 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-07-06 20:15:03.287628 | orchestrator | Sunday 06 July 2025 20:10:24 +0000 (0:00:01.633) 0:01:32.091 *********** 2025-07-06 20:15:03.287637 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.287647 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.287656 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.287666 | orchestrator | 2025-07-06 20:15:03.287675 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-07-06 20:15:03.287685 | orchestrator | Sunday 06 July 2025 20:10:24 +0000 (0:00:00.422) 0:01:32.514 *********** 2025-07-06 20:15:03.287695 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.287705 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.287714 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.287724 | orchestrator | 2025-07-06 20:15:03.287736 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-07-06 20:15:03.287753 | orchestrator | Sunday 06 July 2025 20:10:25 +0000 (0:00:01.288) 0:01:33.802 *********** 2025-07-06 20:15:03.287770 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:15:03.287800 | orchestrator | 2025-07-06 20:15:03.287819 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-07-06 20:15:03.287838 | orchestrator | Sunday 06 July 2025 20:10:26 +0000 (0:00:00.936) 0:01:34.739 *********** 2025-07-06 20:15:03.287865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 
'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-06 20:15:03.287884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-06 20:15:03.287901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.287919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.287930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.287948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.287964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.287975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.287986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-06 20:15:03.288001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.288011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': 
[''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.288033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.288043 | orchestrator | 2025-07-06 20:15:03.288053 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-07-06 20:15:03.288063 | orchestrator | Sunday 06 July 2025 20:10:30 +0000 (0:00:03.393) 0:01:38.133 *********** 2025-07-06 20:15:03.288073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-06 20:15:03.288084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.288099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-volume 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.288115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.288125 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.288142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-06 20:15:03.288152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.288162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.288177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-backup 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.288187 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.288203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-06 20:15:03.288213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.288229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.288239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.288278 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.288288 | orchestrator | 2025-07-06 20:15:03.288299 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-07-06 20:15:03.288308 | orchestrator | Sunday 06 July 2025 20:10:31 +0000 (0:00:01.119) 0:01:39.252 *********** 2025-07-06 20:15:03.288318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-07-06 20:15:03.288328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-07-06 20:15:03.288339 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.288353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-07-06 20:15:03.288370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-07-06 20:15:03.288380 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.288390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-07-06 20:15:03.288399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-07-06 20:15:03.288409 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.288419 | orchestrator | 2025-07-06 20:15:03.288428 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-07-06 20:15:03.288438 | orchestrator | Sunday 06 July 2025 20:10:32 +0000 (0:00:01.035) 0:01:40.288 *********** 2025-07-06 20:15:03.288448 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:15:03.288457 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:15:03.288467 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:15:03.288476 | orchestrator | 2025-07-06 20:15:03.288486 | orchestrator | TASK [proxysql-config : 
Copying over cinder ProxySQL rules config] ************* 2025-07-06 20:15:03.288495 | orchestrator | Sunday 06 July 2025 20:10:33 +0000 (0:00:01.255) 0:01:41.543 *********** 2025-07-06 20:15:03.288505 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:15:03.288514 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:15:03.288524 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:15:03.288533 | orchestrator | 2025-07-06 20:15:03.288543 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-07-06 20:15:03.288552 | orchestrator | Sunday 06 July 2025 20:10:35 +0000 (0:00:02.044) 0:01:43.588 *********** 2025-07-06 20:15:03.288562 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.288571 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.288581 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.288590 | orchestrator | 2025-07-06 20:15:03.288600 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-07-06 20:15:03.288609 | orchestrator | Sunday 06 July 2025 20:10:35 +0000 (0:00:00.308) 0:01:43.897 *********** 2025-07-06 20:15:03.288624 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.288634 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.288644 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.288653 | orchestrator | 2025-07-06 20:15:03.288663 | orchestrator | TASK [include_role : designate] ************************************************ 2025-07-06 20:15:03.288673 | orchestrator | Sunday 06 July 2025 20:10:36 +0000 (0:00:00.540) 0:01:44.437 *********** 2025-07-06 20:15:03.288683 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:15:03.288692 | orchestrator | 2025-07-06 20:15:03.288702 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-07-06 20:15:03.288712 | orchestrator | 
Sunday 06 July 2025 20:10:37 +0000 (0:00:00.767) 0:01:45.204 *********** 2025-07-06 20:15:03.288722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-06 20:15:03.288738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-06 20:15:03.288753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.288764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.288774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.288790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.288802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.288820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-06 20:15:03.288854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-06 20:15:03.288870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-06 20:15:03.288881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-06 20:15:03.288898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.288908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.288928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.288939 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.288953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.288964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.288974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 
'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.288990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.289000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.289016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.289026 | orchestrator | 2025-07-06 20:15:03.289036 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-07-06 20:15:03.289046 | orchestrator | Sunday 06 July 2025 20:10:41 +0000 (0:00:03.947) 0:01:49.151 *********** 2025-07-06 20:15:03.289061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-06 20:15:03.289071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-06 20:15:03.289087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-06 20:15:03.289098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-06 20:15:03.289114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.289124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.289138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.289149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.289158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.289394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.289429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.289440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.289450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.289460 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.289476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-06 20:15:03.289487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.289497 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.289581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-06 20:15:03.289606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 
'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.289623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.289634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.289649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.289660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.289670 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.289680 | orchestrator | 2025-07-06 20:15:03.289690 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-07-06 20:15:03.289700 | orchestrator | Sunday 06 July 2025 20:10:42 +0000 (0:00:00.897) 0:01:50.049 *********** 2025-07-06 20:15:03.289711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-07-06 20:15:03.289721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-07-06 20:15:03.289731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  
2025-07-06 20:15:03.289747 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.289757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-07-06 20:15:03.289767 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.289843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-07-06 20:15:03.289863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-07-06 20:15:03.289879 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.289896 | orchestrator | 2025-07-06 20:15:03.289913 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-07-06 20:15:03.289928 | orchestrator | Sunday 06 July 2025 20:10:43 +0000 (0:00:01.187) 0:01:51.237 *********** 2025-07-06 20:15:03.289946 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:15:03.289962 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:15:03.289974 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:15:03.289984 | orchestrator | 2025-07-06 20:15:03.289994 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-07-06 20:15:03.290004 | orchestrator | Sunday 06 July 2025 20:10:45 +0000 (0:00:01.806) 0:01:53.044 *********** 2025-07-06 20:15:03.290054 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:15:03.290067 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:15:03.290076 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:15:03.290086 | orchestrator | 2025-07-06 
20:15:03.290096 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-07-06 20:15:03.290105 | orchestrator | Sunday 06 July 2025 20:10:47 +0000 (0:00:02.164) 0:01:55.208 *********** 2025-07-06 20:15:03.290115 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.290125 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.290134 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.290144 | orchestrator | 2025-07-06 20:15:03.290154 | orchestrator | TASK [include_role : glance] *************************************************** 2025-07-06 20:15:03.290164 | orchestrator | Sunday 06 July 2025 20:10:47 +0000 (0:00:00.346) 0:01:55.554 *********** 2025-07-06 20:15:03.290173 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:15:03.290183 | orchestrator | 2025-07-06 20:15:03.290193 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-07-06 20:15:03.290202 | orchestrator | Sunday 06 July 2025 20:10:48 +0000 (0:00:00.827) 0:01:56.381 *********** 2025-07-06 20:15:03.290215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-06 20:15:03.290442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-07-06 20:15:03.290480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-06 20:15:03.290561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-07-06 20:15:03.290580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-06 20:15:03.290647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-07-06 20:15:03.290673 | orchestrator | 2025-07-06 20:15:03.290682 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-07-06 20:15:03.290691 | orchestrator | Sunday 06 July 2025 20:10:52 +0000 (0:00:04.541) 0:02:00.923 *********** 2025-07-06 20:15:03.290700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-06 20:15:03.290713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 
check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-07-06 20:15:03.290729 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.290792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-06 20:15:03.290809 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-07-06 20:15:03.290832 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.290915 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-06 20:15:03.290943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': 
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-07-06 20:15:03.290969 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.290981 | orchestrator | 2025-07-06 20:15:03.290990 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-07-06 20:15:03.290998 | orchestrator | Sunday 06 July 2025 20:10:55 +0000 (0:00:02.883) 0:02:03.807 *********** 2025-07-06 
20:15:03.291006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-07-06 20:15:03.291075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-07-06 20:15:03.291089 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.291103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-07-06 20:15:03.291112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-07-06 20:15:03.291121 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.291129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-07-06 20:15:03.291137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-07-06 20:15:03.291157 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.291169 | orchestrator | 2025-07-06 20:15:03.291182 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-07-06 20:15:03.291195 | orchestrator | Sunday 06 July 2025 20:10:59 +0000 (0:00:03.272) 0:02:07.080 *********** 2025-07-06 20:15:03.291206 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:15:03.291218 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:15:03.291231 | orchestrator | changed: 
[testbed-node-2] 2025-07-06 20:15:03.291261 | orchestrator | 2025-07-06 20:15:03.291274 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-07-06 20:15:03.291300 | orchestrator | Sunday 06 July 2025 20:11:00 +0000 (0:00:01.818) 0:02:08.898 *********** 2025-07-06 20:15:03.291312 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:15:03.291326 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:15:03.291339 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:15:03.291352 | orchestrator | 2025-07-06 20:15:03.291367 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-07-06 20:15:03.291380 | orchestrator | Sunday 06 July 2025 20:11:03 +0000 (0:00:02.038) 0:02:10.936 *********** 2025-07-06 20:15:03.291393 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.291406 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.291418 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.291430 | orchestrator | 2025-07-06 20:15:03.291444 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-07-06 20:15:03.291457 | orchestrator | Sunday 06 July 2025 20:11:03 +0000 (0:00:00.298) 0:02:11.235 *********** 2025-07-06 20:15:03.291470 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:15:03.291484 | orchestrator | 2025-07-06 20:15:03.291497 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-07-06 20:15:03.291509 | orchestrator | Sunday 06 July 2025 20:11:04 +0000 (0:00:00.859) 0:02:12.094 *********** 2025-07-06 20:15:03.291591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-06 20:15:03.291605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-06 20:15:03.291614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-06 20:15:03.291635 | orchestrator | 2025-07-06 20:15:03.291647 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 
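For orientation, the `grafana_server` haproxy entry logged above (port 3000, internal, `check inter 2000 rise 2 fall 5` defaults) would typically render to an HAProxy section roughly like the sketch below. This is a hedged illustration only: the bind address 192.168.16.9 is assumed to be the internal VIP (it appears in the `no_proxy` lists earlier in this log), and the exact directives depend on the kolla-ansible haproxy template in use.

```
# Hypothetical rendering of the grafana_server service entry (assumption,
# not taken verbatim from the deployed haproxy.cfg)
listen grafana_server
    mode http
    bind 192.168.16.9:3000
    server testbed-node-0 192.168.16.10:3000 check inter 2000 rise 2 fall 5
    server testbed-node-1 192.168.16.11:3000 check inter 2000 rise 2 fall 5
    server testbed-node-2 192.168.16.12:3000 check inter 2000 rise 2 fall 5
```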
2025-07-06 20:15:03.291655 | orchestrator | Sunday 06 July 2025 20:11:07 +0000 (0:00:03.437) 0:02:15.531 ***********
2025-07-06 20:15:03.291675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-06 20:15:03.291684 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:15:03.291692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-06 20:15:03.291700 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:15:03.291709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-06 20:15:03.291717 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:15:03.291725 | orchestrator |
2025-07-06 20:15:03.291733 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] ***********************
2025-07-06 20:15:03.291740 | orchestrator | Sunday 06 July 2025 20:11:07 +0000 (0:00:00.388) 0:02:15.920 ***********
2025-07-06 20:15:03.291803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2025-07-06 20:15:03.291817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-07-06 20:15:03.291831 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:15:03.291840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2025-07-06 20:15:03.291848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-07-06 20:15:03.291863 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:15:03.291871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2025-07-06 20:15:03.291879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-07-06 20:15:03.291886 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:15:03.291894 | orchestrator |
2025-07-06 20:15:03.291902 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************
2025-07-06 20:15:03.291913 | orchestrator | Sunday 06 July 2025 20:11:08 +0000 (0:00:01.501) 0:02:16.551 ***********
2025-07-06 20:15:03.291927 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:15:03.291939 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:15:03.291951 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:15:03.291966 | orchestrator |
2025-07-06 20:15:03.291980 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************
2025-07-06 20:15:03.291993 | orchestrator | Sunday 06 July 2025 20:11:10 +0000 (0:00:01.501) 0:02:18.052 ***********
2025-07-06 20:15:03.292006 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:15:03.292014 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:15:03.292022 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:15:03.292030 | orchestrator |
2025-07-06 20:15:03.292038 | orchestrator | TASK [include_role : heat] *****************************************************
2025-07-06 20:15:03.292046 | orchestrator | Sunday 06 July 2025 20:11:12 +0000 (0:00:01.977) 0:02:20.030 ***********
2025-07-06 20:15:03.292054 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:15:03.292061 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:15:03.292069 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:15:03.292077 | orchestrator |
2025-07-06 20:15:03.292085 | orchestrator | TASK [include_role : horizon] **************************************************
2025-07-06 20:15:03.292093 | orchestrator | Sunday 06 July 2025 20:11:12 +0000 (0:00:00.315) 0:02:20.345 ***********
2025-07-06 20:15:03.292101 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-06 20:15:03.292109 | orchestrator |
2025-07-06 20:15:03.292122 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ********************
2025-07-06 20:15:03.292130 | orchestrator | Sunday 06 July 2025 20:11:13 +0000 (0:00:00.858) 0:02:21.204 ***********
2025-07-06 20:15:03.292200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-07-06 20:15:03.292228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-07-06 20:15:03.292321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-07-06 20:15:03.292348 | orchestrator |
2025-07-06 20:15:03.292357 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] ***
2025-07-06 20:15:03.292365 | orchestrator | Sunday 06 July 2025 20:11:17 +0000 (0:00:03.784) 0:02:24.988 ***********
2025-07-06 20:15:03.292379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-07-06 20:15:03.292388 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:15:03.292450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-07-06 20:15:03.292472 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:15:03.292488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-07-06 20:15:03.292497 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:15:03.292505 | orchestrator |
2025-07-06 20:15:03.292513 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] ***********************
2025-07-06 20:15:03.292526 | orchestrator | Sunday 06 July 2025 20:11:17 +0000 (0:00:00.799) 0:02:25.788 ***********
2025-07-06 20:15:03.292535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-07-06 20:15:03.292596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-07-06 20:15:03.292612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-07-06 20:15:03.292625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-07-06 20:15:03.292635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-07-06 20:15:03.292643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-07-06 20:15:03.292651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-07-06 20:15:03.292659 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:15:03.292667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-07-06 20:15:03.292675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-07-06 20:15:03.292688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-07-06 20:15:03.292696 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:15:03.292704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-07-06 20:15:03.292712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-07-06 20:15:03.292726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-07-06 20:15:03.292734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-07-06 20:15:03.292742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-07-06 20:15:03.292750 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:15:03.292758 | orchestrator |
2025-07-06 20:15:03.292817 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2025-07-06 20:15:03.292829 | orchestrator | Sunday 06 July 2025 20:11:18 +0000 (0:00:01.120) 0:02:26.909 ***********
2025-07-06 20:15:03.292837 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:15:03.292845 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:15:03.292854 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:15:03.292867 | orchestrator |
2025-07-06 20:15:03.292876 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2025-07-06 20:15:03.292884 | orchestrator | Sunday 06 July 2025 20:11:20 +0000 (0:00:01.602) 0:02:28.511 ***********
2025-07-06 20:15:03.292891 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:15:03.292899 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:15:03.292907 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:15:03.292915 | orchestrator |
2025-07-06 20:15:03.292922 | orchestrator | TASK [include_role : influxdb] *************************************************
2025-07-06 20:15:03.292930 | orchestrator | Sunday 06 July 2025 20:11:22 +0000 (0:00:02.125) 0:02:30.636 ***********
2025-07-06 20:15:03.292938 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:15:03.292948 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:15:03.292961 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:15:03.292974 | orchestrator |
2025-07-06 20:15:03.292988 | orchestrator | TASK [include_role : ironic] ***************************************************
2025-07-06 20:15:03.293002 | orchestrator | Sunday 06 July 2025 20:11:23 +0000 (0:00:00.322) 0:02:30.959 ***********
2025-07-06 20:15:03.293015 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:15:03.293029 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:15:03.293037 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:15:03.293045 | orchestrator |
2025-07-06 20:15:03.293053 | orchestrator | TASK [include_role : keystone] *************************************************
2025-07-06 20:15:03.293061 | orchestrator | Sunday 06 July 2025 20:11:23 +0000 (0:00:00.310) 0:02:31.270 ***********
2025-07-06 20:15:03.293069 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-06 20:15:03.293077 | orchestrator |
2025-07-06 20:15:03.293084 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] *******************
2025-07-06 20:15:03.293092 | orchestrator | Sunday 06 July 2025 20:11:24 +0000 (0:00:01.134) 0:02:32.405 ***********
2025-07-06 20:15:03.293106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-06 20:15:03.293123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-06 20:15:03.293132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-06 20:15:03.293199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-06 20:15:03.293212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-06 20:15:03.293220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-06 20:15:03.293242 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-06 20:15:03.293305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-06 20:15:03.293314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-06 20:15:03.293323 | orchestrator |
2025-07-06 20:15:03.293388 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] ***
2025-07-06 20:15:03.293400 | orchestrator | Sunday 06 July 2025 20:11:28 +0000 (0:00:04.359) 0:02:36.765 ***********
2025-07-06 20:15:03.293409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-06 20:15:03.293418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-06 20:15:03.293445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-06 20:15:03.293454 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:15:03.293463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-06 20:15:03.293472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-06 20:15:03.293536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-06 20:15:03.293549 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:15:03.293558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-06 20:15:03.293573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes':
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-06 20:15:03.293586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-06 20:15:03.293601 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.293611 | orchestrator | 2025-07-06 20:15:03.293619 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-07-06 20:15:03.293627 | orchestrator | Sunday 06 July 2025 20:11:29 +0000 (0:00:00.688) 0:02:37.453 *********** 2025-07-06 20:15:03.293636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-07-06 20:15:03.293646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-07-06 
20:15:03.293654 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.293662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-07-06 20:15:03.293723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-07-06 20:15:03.293735 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.293747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-07-06 20:15:03.293759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-07-06 20:15:03.293767 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.293774 | orchestrator | 2025-07-06 20:15:03.293782 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-07-06 20:15:03.293790 | orchestrator | Sunday 06 July 2025 20:11:30 +0000 (0:00:01.180) 0:02:38.634 *********** 2025-07-06 20:15:03.293798 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:15:03.293812 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:15:03.293820 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:15:03.293827 | orchestrator | 2025-07-06 20:15:03.293833 | orchestrator | TASK 
[proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-07-06 20:15:03.293840 | orchestrator | Sunday 06 July 2025 20:11:32 +0000 (0:00:01.722) 0:02:40.356 *********** 2025-07-06 20:15:03.293847 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:15:03.293854 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:15:03.293860 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:15:03.293867 | orchestrator | 2025-07-06 20:15:03.293873 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-07-06 20:15:03.293880 | orchestrator | Sunday 06 July 2025 20:11:34 +0000 (0:00:01.917) 0:02:42.274 *********** 2025-07-06 20:15:03.293887 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.293896 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.293907 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.293918 | orchestrator | 2025-07-06 20:15:03.293928 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-07-06 20:15:03.293939 | orchestrator | Sunday 06 July 2025 20:11:34 +0000 (0:00:00.290) 0:02:42.565 *********** 2025-07-06 20:15:03.293950 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:15:03.293961 | orchestrator | 2025-07-06 20:15:03.293971 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-07-06 20:15:03.293982 | orchestrator | Sunday 06 July 2025 20:11:35 +0000 (0:00:01.216) 0:02:43.781 *********** 2025-07-06 20:15:03.293999 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-06 20:15:03.294039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.294104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-06 20:15:03.294123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.294130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-06 20:15:03.294143 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.294155 | orchestrator | 2025-07-06 20:15:03.294162 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-07-06 20:15:03.294169 | orchestrator | Sunday 06 July 2025 20:11:39 +0000 (0:00:03.487) 0:02:47.268 *********** 2025-07-06 20:15:03.294176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-06 20:15:03.294229 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.294264 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.294273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-06 20:15:03.294285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.294293 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.294304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-06 20:15:03.294312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.294319 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.294331 | orchestrator | 2025-07-06 20:15:03.294338 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-07-06 20:15:03.294345 | orchestrator | Sunday 06 July 2025 20:11:39 +0000 (0:00:00.618) 0:02:47.886 *********** 2025-07-06 20:15:03.294396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-07-06 20:15:03.294407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-07-06 20:15:03.294417 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.294427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-07-06 20:15:03.294434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-07-06 20:15:03.294441 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.294447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-07-06 20:15:03.294454 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-07-06 20:15:03.294461 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.294467 | orchestrator | 2025-07-06 20:15:03.294474 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-07-06 20:15:03.294481 | orchestrator | Sunday 06 July 2025 20:11:41 +0000 (0:00:01.388) 0:02:49.274 *********** 2025-07-06 20:15:03.294488 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:15:03.294494 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:15:03.294501 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:15:03.294507 | orchestrator | 2025-07-06 20:15:03.294514 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-07-06 20:15:03.294521 | orchestrator | Sunday 06 July 2025 20:11:42 +0000 (0:00:01.296) 0:02:50.571 *********** 2025-07-06 20:15:03.294527 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:15:03.294534 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:15:03.294540 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:15:03.294547 | orchestrator | 2025-07-06 20:15:03.294553 | orchestrator | TASK [include_role : manila] *************************************************** 2025-07-06 20:15:03.294560 | orchestrator | Sunday 06 July 2025 20:11:44 +0000 (0:00:01.939) 0:02:52.510 *********** 2025-07-06 20:15:03.294566 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:15:03.294573 | orchestrator | 2025-07-06 20:15:03.294580 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-07-06 20:15:03.294586 | orchestrator | Sunday 06 July 2025 20:11:45 +0000 (0:00:01.056) 0:02:53.566 *********** 2025-07-06 20:15:03.294598 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-07-06 20:15:03.294612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.294664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.294675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.294685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-07-06 20:15:03.294696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.294709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.294724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.294783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-07-06 20:15:03.294798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-07-06 20:15:03.294808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-07-06 20:15:03.294815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-07-06 20:15:03.294822 | orchestrator |
2025-07-06 20:15:03.294829 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] ***
2025-07-06 20:15:03.294836 | orchestrator | Sunday 06 July 2025 20:11:49 +0000 (0:00:04.154) 0:02:57.720 ***********
2025-07-06 20:15:03.294843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-07-06 20:15:03.294856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-07-06 20:15:03.294912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-07-06 20:15:03.294923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-07-06 20:15:03.294930 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:15:03.294953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-07-06 20:15:03.294963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-07-06 20:15:03.294984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-07-06 20:15:03.294992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-07-06 20:15:03.295001 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:15:03.295081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-07-06 20:15:03.295100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-07-06 20:15:03.295110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-07-06 20:15:03.295117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-07-06 20:15:03.295131 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:15:03.295138 | orchestrator |
2025-07-06 20:15:03.295145 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************
2025-07-06 20:15:03.295156 | orchestrator | Sunday 06 July 2025 20:11:50 +0000 (0:00:00.917) 0:02:58.638 ***********
2025-07-06 20:15:03.295163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-07-06 20:15:03.295170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-07-06 20:15:03.295177 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:15:03.295184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-07-06 20:15:03.295191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-07-06 20:15:03.295198 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:15:03.295204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-07-06 20:15:03.295211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-07-06 20:15:03.295218 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:15:03.295224 | orchestrator |
2025-07-06 20:15:03.295231 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] *************
2025-07-06 20:15:03.295238 | orchestrator | Sunday 06 July 2025 20:11:51 +0000 (0:00:01.047) 0:02:59.686 ***********
2025-07-06 20:15:03.295289 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:15:03.295304 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:15:03.295315 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:15:03.295325 | orchestrator |
2025-07-06 20:15:03.295335 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] *************
2025-07-06 20:15:03.295347 | orchestrator | Sunday 06 July 2025 20:11:53 +0000 (0:00:01.647) 0:03:01.334 ***********
2025-07-06 20:15:03.295426 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:15:03.295437 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:15:03.295449 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:15:03.295457 | orchestrator |
2025-07-06 20:15:03.295464 | orchestrator | TASK [include_role : mariadb] **************************************************
2025-07-06 20:15:03.295471 | orchestrator | Sunday 06 July 2025 20:11:55 +0000 (0:00:02.154) 0:03:03.488 ***********
2025-07-06 20:15:03.295478 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-06 20:15:03.295485 | orchestrator |
2025-07-06 20:15:03.295492 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] *******************************
2025-07-06 20:15:03.295498 | orchestrator | Sunday 06 July 2025 20:11:56 +0000 (0:00:01.070) 0:03:04.558 ***********
2025-07-06 20:15:03.295505 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-07-06 20:15:03.295512 | orchestrator |
2025-07-06 20:15:03.295519 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ********************
2025-07-06 20:15:03.295525 | orchestrator | Sunday 06 July 2025 20:11:59 +0000 (0:00:03.073) 0:03:07.632 ***********
2025-07-06 20:15:03.295538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-07-06 20:15:03.295555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-07-06 20:15:03.295562 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:15:03.295616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-07-06 20:15:03.295634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-07-06 20:15:03.295647 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:15:03.295659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-07-06 20:15:03.295667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-07-06 20:15:03.295674 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:15:03.295681 | orchestrator |
2025-07-06 20:15:03.295688 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] ***
2025-07-06 20:15:03.295759 | orchestrator | Sunday 06 July 2025 20:12:02 +0000 (0:00:02.496) 0:03:10.129 ***********
2025-07-06 20:15:03.295776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-07-06 20:15:03.295802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-07-06 20:15:03.295813 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:15:03.295886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-07-06 20:15:03.295899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-07-06 20:15:03.295913 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:15:03.295929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-07-06 20:15:03.295936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-07-06 20:15:03.295942 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:15:03.295949 | orchestrator |
2025-07-06 20:15:03.295955 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] ***********************
2025-07-06 20:15:03.295961 | orchestrator | Sunday 06 July 2025 20:12:04 +0000 (0:00:02.251) 0:03:12.380 ***********
2025-07-06 20:15:03.296013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-07-06 20:15:03.296023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-07-06 20:15:03.296046 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:15:03.296056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-07-06 20:15:03.296067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-07-06 20:15:03.296078 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:15:03.296096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-07-06 20:15:03.296108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-07-06 20:15:03.296119 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:15:03.296126 | orchestrator |
2025-07-06 20:15:03.296132 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************
2025-07-06 20:15:03.296138 | orchestrator | Sunday 06 July 2025 20:12:06 +0000 (0:00:02.542) 0:03:14.923 ***********
2025-07-06 20:15:03.296145 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:15:03.296151 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:15:03.296157 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:15:03.296163 | orchestrator |
2025-07-06 20:15:03.296170 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2025-07-06 20:15:03.296176 | orchestrator | Sunday 06 July 2025 20:12:09 +0000 (0:00:02.062) 0:03:16.985 ***********
2025-07-06 20:15:03.296182 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:15:03.296188 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:15:03.296194 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:15:03.296200 | orchestrator |
2025-07-06 20:15:03.296207 | orchestrator | TASK [include_role : masakari] *************************************************
2025-07-06 20:15:03.296219 | orchestrator | Sunday 06 July 2025 20:12:10 +0000 (0:00:00.303) 0:03:18.423 ***********
2025-07-06 20:15:03.296225 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:15:03.296231 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:15:03.296237 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:15:03.296243 | orchestrator |
2025-07-06 20:15:03.296275 | orchestrator | TASK [include_role : memcached] ************************************************
2025-07-06 20:15:03.296281 | orchestrator | Sunday 06 July 2025 20:12:10 +0000 (0:00:00.303) 0:03:18.727 ***********
2025-07-06 20:15:03.296339 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-06 20:15:03.296349 | orchestrator |
2025-07-06 20:15:03.296355 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2025-07-06 20:15:03.296363 | orchestrator | Sunday 06 July 2025 20:12:11 +0000 (0:00:01.082) 0:03:19.809 ***********
2025-07-06 20:15:03.296375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-07-06 20:15:03.296383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-07-06 20:15:03.296394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-07-06 20:15:03.296400 | orchestrator |
2025-07-06 20:15:03.296407 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] ***
2025-07-06 20:15:03.296413 | orchestrator | Sunday 06 July 2025 20:12:13 +0000 (0:00:01.823) 0:03:21.632 ***********
2025-07-06 20:15:03.296420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-07-06 20:15:03.296474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-07-06 20:15:03.296484 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:15:03.296490 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:15:03.296499 | orchestrator | skipping:
[testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-07-06 20:15:03.296510 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.296516 | orchestrator | 2025-07-06 20:15:03.296523 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-07-06 20:15:03.296529 | orchestrator | Sunday 06 July 2025 20:12:14 +0000 (0:00:00.417) 0:03:22.050 *********** 2025-07-06 20:15:03.296536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-07-06 20:15:03.296544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-07-06 20:15:03.296550 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.296556 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.296562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 
'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-07-06 20:15:03.296569 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.296575 | orchestrator | 2025-07-06 20:15:03.296581 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-07-06 20:15:03.296587 | orchestrator | Sunday 06 July 2025 20:12:14 +0000 (0:00:00.577) 0:03:22.628 *********** 2025-07-06 20:15:03.296593 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.296599 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.296609 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.296616 | orchestrator | 2025-07-06 20:15:03.296622 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-07-06 20:15:03.296628 | orchestrator | Sunday 06 July 2025 20:12:15 +0000 (0:00:00.743) 0:03:23.371 *********** 2025-07-06 20:15:03.296635 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.296646 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.296652 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.296658 | orchestrator | 2025-07-06 20:15:03.296664 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-07-06 20:15:03.296670 | orchestrator | Sunday 06 July 2025 20:12:16 +0000 (0:00:01.247) 0:03:24.618 *********** 2025-07-06 20:15:03.296677 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.296683 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.296689 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.296695 | orchestrator | 2025-07-06 20:15:03.296701 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-07-06 20:15:03.296707 | orchestrator | Sunday 06 July 2025 
20:12:17 +0000 (0:00:00.330) 0:03:24.949 *********** 2025-07-06 20:15:03.296713 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:15:03.296720 | orchestrator | 2025-07-06 20:15:03.296726 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-07-06 20:15:03.296732 | orchestrator | Sunday 06 July 2025 20:12:18 +0000 (0:00:01.425) 0:03:26.374 *********** 2025-07-06 20:15:03.296782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-06 20:15:03.296792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.296801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.296817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.296832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-07-06 20:15:03.296839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.296889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-06 20:15:03.296899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-06 20:15:03.296911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.296920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:15:03.296936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 
'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.296943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-07-06 20:15:03.296949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-06 20:15:03.297000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-06 20:15:03.297010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.297019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.297040 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-07-06 20:15:03.297048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.297113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:15:03.297128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.297139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.297156 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-07-06 20:15:03.297163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.297170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-06 20:15:03.297177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-06 20:15:03.297218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.297232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:15:03.297263 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-06 20:15:03.297274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.297281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.297323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-07-06 20:15:03.297333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.297339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 
'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-06 20:15:03.297353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.297367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.297374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-07-06 20:15:03.297427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-07-06 20:15:03.297438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.297454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:15:03.297462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-06 20:15:03.297472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.297479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-06 20:15:03.297485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.297534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:15:03.297544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.297557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-07-06 20:15:03.297567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-06 20:15:03.297578 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.297585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-07-06 20:15:03.297634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:15:03.297643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.297656 | orchestrator | 2025-07-06 20:15:03.297666 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-07-06 20:15:03.297673 | orchestrator | Sunday 06 July 2025 20:12:22 +0000 (0:00:04.213) 0:03:30.588 *********** 2025-07-06 20:15:03.297680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-06 20:15:03.297690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.297697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.297748 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.297763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-07-06 20:15:03.297774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.297781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-06 20:15:03.297792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-06 20:15:03.297799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.297847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-06 20:15:03.297856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:15:03.297871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-06 20:15:03.297883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.297890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.297896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.297952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.297970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-07-06 20:15:03.297979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.297992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.297998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 
'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-06 20:15:03.298005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.298075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-07-06 20:15:03.298095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': 
{'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.298105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-07-06 20:15:03.298115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.298122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-07-06 20:15:03.298196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.298216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-06 20:15:03.298223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:15:03.298230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-06 20:15:03.298237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-06 20:15:03.298292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 
'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.298302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-06 20:15:03.298309 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.298375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.298400 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.298411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:15:03.298423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:15:03.298438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.298449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.298469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-07-06 20:15:03.298544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-07-06 20:15:03.298555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-06 20:15:03.298565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-06 20:15:03.298573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 
'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.298579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.298585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': 
'9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-07-06 20:15:03.298629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-07-06 20:15:03.298637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:15:03.298643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:15:03.298651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.298657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.298666 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.298672 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.298677 | orchestrator | 2025-07-06 20:15:03.298683 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-07-06 20:15:03.298689 | orchestrator | Sunday 06 July 2025 20:12:24 +0000 (0:00:01.532) 0:03:32.120 *********** 2025-07-06 20:15:03.298694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-07-06 20:15:03.298700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-07-06 20:15:03.298706 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.298727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-07-06 20:15:03.298734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-07-06 20:15:03.298739 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.298745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-07-06 20:15:03.298750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-07-06 20:15:03.298756 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.298761 | orchestrator | 2025-07-06 20:15:03.298766 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-07-06 20:15:03.298772 | orchestrator | Sunday 06 July 2025 20:12:26 +0000 (0:00:02.133) 0:03:34.253 *********** 2025-07-06 20:15:03.298777 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:15:03.298783 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:15:03.298788 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:15:03.298794 | orchestrator | 2025-07-06 20:15:03.298799 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-07-06 20:15:03.298804 | orchestrator | Sunday 06 July 2025 20:12:27 +0000 (0:00:01.224) 0:03:35.478 *********** 2025-07-06 20:15:03.298810 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:15:03.298815 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:15:03.298820 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:15:03.298826 | orchestrator | 2025-07-06 20:15:03.298831 | orchestrator | TASK [include_role : placement] ************************************************ 2025-07-06 20:15:03.298837 | orchestrator | Sunday 06 July 2025 20:12:29 +0000 (0:00:01.948) 0:03:37.427 *********** 2025-07-06 20:15:03.298842 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:15:03.298848 | orchestrator | 2025-07-06 20:15:03.298854 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-07-06 20:15:03.298859 | orchestrator | Sunday 06 July 2025 20:12:30 +0000 (0:00:01.153) 0:03:38.580 *********** 2025-07-06 20:15:03.298868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 
'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-06 20:15:03.298879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-06 20:15:03.298900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-06 20:15:03.298907 | orchestrator | 2025-07-06 20:15:03.298913 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-07-06 20:15:03.298918 | orchestrator | Sunday 06 July 2025 20:12:34 +0000 (0:00:03.474) 0:03:42.054 *********** 2025-07-06 20:15:03.298924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-06 20:15:03.298929 | orchestrator | skipping: [testbed-node-0] 2025-07-06 
20:15:03.298938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-06 20:15:03.298952 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.298957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-06 20:15:03.298963 | orchestrator | skipping: [testbed-node-2] 
2025-07-06 20:15:03.298968 | orchestrator | 2025-07-06 20:15:03.298973 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-07-06 20:15:03.298979 | orchestrator | Sunday 06 July 2025 20:12:34 +0000 (0:00:00.520) 0:03:42.575 *********** 2025-07-06 20:15:03.298984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-07-06 20:15:03.298991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-07-06 20:15:03.298997 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.299018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-07-06 20:15:03.299025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-07-06 20:15:03.299031 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.299038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-07-06 20:15:03.299049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-07-06 20:15:03.299059 | orchestrator 
| skipping: [testbed-node-2] 2025-07-06 20:15:03.299069 | orchestrator | 2025-07-06 20:15:03.299079 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-07-06 20:15:03.299088 | orchestrator | Sunday 06 July 2025 20:12:35 +0000 (0:00:00.775) 0:03:43.351 *********** 2025-07-06 20:15:03.299098 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:15:03.299104 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:15:03.299109 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:15:03.299114 | orchestrator | 2025-07-06 20:15:03.299119 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-07-06 20:15:03.299130 | orchestrator | Sunday 06 July 2025 20:12:37 +0000 (0:00:01.706) 0:03:45.058 *********** 2025-07-06 20:15:03.299135 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:15:03.299141 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:15:03.299146 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:15:03.299151 | orchestrator | 2025-07-06 20:15:03.299157 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-07-06 20:15:03.299162 | orchestrator | Sunday 06 July 2025 20:12:39 +0000 (0:00:02.053) 0:03:47.111 *********** 2025-07-06 20:15:03.299168 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:15:03.299173 | orchestrator | 2025-07-06 20:15:03.299179 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-07-06 20:15:03.299184 | orchestrator | Sunday 06 July 2025 20:12:40 +0000 (0:00:01.249) 0:03:48.360 *********** 2025-07-06 20:15:03.299194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-06 20:15:03.299201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.299226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.299233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-06 20:15:03.299243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.299269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.299275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-06 20:15:03.299297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.299304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.299315 | orchestrator | 2025-07-06 20:15:03.299323 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-07-06 20:15:03.299332 | orchestrator | Sunday 06 July 2025 20:12:44 +0000 (0:00:04.451) 0:03:52.812 *********** 2025-07-06 20:15:03.299346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-06 20:15:03.299357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.299368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.299375 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.299401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-06 20:15:03.299413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.299420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.299430 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.299444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-06 20:15:03.299454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.299490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:15:03.299503 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.299509 | orchestrator | 2025-07-06 20:15:03.299514 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-07-06 20:15:03.299520 | orchestrator | Sunday 06 July 2025 20:12:45 +0000 (0:00:01.006) 0:03:53.819 *********** 2025-07-06 20:15:03.299525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-07-06 20:15:03.299532 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-07-06 20:15:03.299537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-07-06 20:15:03.299543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-07-06 20:15:03.299548 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.299554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-07-06 20:15:03.299559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-07-06 20:15:03.299566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-07-06 20:15:03.299575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-07-06 20:15:03.299584 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.299597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 
'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-07-06 20:15:03.299607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-07-06 20:15:03.299616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-07-06 20:15:03.299625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-07-06 20:15:03.299633 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.299638 | orchestrator | 2025-07-06 20:15:03.299644 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-07-06 20:15:03.299653 | orchestrator | Sunday 06 July 2025 20:12:46 +0000 (0:00:00.872) 0:03:54.691 *********** 2025-07-06 20:15:03.299661 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:15:03.299670 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:15:03.299679 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:15:03.299687 | orchestrator | 2025-07-06 20:15:03.299696 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-07-06 20:15:03.299714 | orchestrator | Sunday 06 July 2025 20:12:48 +0000 (0:00:01.698) 0:03:56.390 *********** 2025-07-06 20:15:03.299723 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:15:03.299731 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:15:03.299740 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:15:03.299745 
| orchestrator | 2025-07-06 20:15:03.299751 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-07-06 20:15:03.299756 | orchestrator | Sunday 06 July 2025 20:12:50 +0000 (0:00:02.143) 0:03:58.533 *********** 2025-07-06 20:15:03.299762 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:15:03.299767 | orchestrator | 2025-07-06 20:15:03.299773 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-07-06 20:15:03.299798 | orchestrator | Sunday 06 July 2025 20:12:52 +0000 (0:00:01.554) 0:04:00.088 *********** 2025-07-06 20:15:03.299805 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-07-06 20:15:03.299811 | orchestrator | 2025-07-06 20:15:03.299816 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-07-06 20:15:03.299822 | orchestrator | Sunday 06 July 2025 20:12:53 +0000 (0:00:01.032) 0:04:01.120 *********** 2025-07-06 20:15:03.299828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-07-06 20:15:03.299834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 
1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-07-06 20:15:03.299840 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-07-06 20:15:03.299845 | orchestrator | 2025-07-06 20:15:03.299851 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-07-06 20:15:03.299857 | orchestrator | Sunday 06 July 2025 20:12:57 +0000 (0:00:03.936) 0:04:05.056 *********** 2025-07-06 20:15:03.299865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-06 20:15:03.299871 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.299880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': 
['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-06 20:15:03.299895 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.299905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-06 20:15:03.299914 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.299924 | orchestrator | 2025-07-06 20:15:03.299930 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-07-06 20:15:03.299935 | orchestrator | Sunday 06 July 2025 20:12:58 +0000 (0:00:01.264) 0:04:06.320 *********** 2025-07-06 20:15:03.299962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-07-06 20:15:03.299973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-07-06 20:15:03.299983 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.299994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-07-06 20:15:03.300003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-07-06 20:15:03.300014 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.300021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-07-06 20:15:03.300027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-07-06 20:15:03.300032 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.300038 | orchestrator | 2025-07-06 20:15:03.300043 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-07-06 20:15:03.300048 | orchestrator | Sunday 06 July 2025 20:13:00 +0000 (0:00:01.994) 0:04:08.315 *********** 2025-07-06 20:15:03.300054 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:15:03.300059 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:15:03.300065 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:15:03.300070 | orchestrator | 2025-07-06 20:15:03.300075 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-07-06 20:15:03.300081 | orchestrator | Sunday 06 July 2025 20:13:02 +0000 (0:00:02.317) 0:04:10.632 *********** 2025-07-06 20:15:03.300086 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:15:03.300092 | orchestrator | 
changed: [testbed-node-1] 2025-07-06 20:15:03.300097 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:15:03.300103 | orchestrator | 2025-07-06 20:15:03.300108 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-07-06 20:15:03.300118 | orchestrator | Sunday 06 July 2025 20:13:06 +0000 (0:00:03.398) 0:04:14.030 *********** 2025-07-06 20:15:03.300125 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-07-06 20:15:03.300131 | orchestrator | 2025-07-06 20:15:03.300142 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-07-06 20:15:03.300147 | orchestrator | Sunday 06 July 2025 20:13:07 +0000 (0:00:00.914) 0:04:14.945 *********** 2025-07-06 20:15:03.300153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-06 20:15:03.300159 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.300164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 
'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-06 20:15:03.300170 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.300194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-06 20:15:03.300201 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.300206 | orchestrator | 2025-07-06 20:15:03.300212 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-07-06 20:15:03.300217 | orchestrator | Sunday 06 July 2025 20:13:08 +0000 (0:00:01.324) 0:04:16.270 *********** 2025-07-06 20:15:03.300223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-06 20:15:03.300229 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.300235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': 
False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-06 20:15:03.300240 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.300301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-06 20:15:03.300313 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.300318 | orchestrator | 2025-07-06 20:15:03.300324 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-07-06 20:15:03.300330 | orchestrator | Sunday 06 July 2025 20:13:09 +0000 (0:00:01.657) 0:04:17.927 *********** 2025-07-06 20:15:03.300335 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.300341 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.300346 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.300352 | orchestrator | 2025-07-06 20:15:03.300360 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-07-06 20:15:03.300366 | orchestrator | Sunday 06 July 2025 20:13:11 +0000 (0:00:01.248) 0:04:19.175 *********** 2025-07-06 20:15:03.300371 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:15:03.300377 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:15:03.300383 | orchestrator | ok: [testbed-node-1] 
2025-07-06 20:15:03.300392 | orchestrator | 2025-07-06 20:15:03.300401 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-07-06 20:15:03.300410 | orchestrator | Sunday 06 July 2025 20:13:13 +0000 (0:00:02.596) 0:04:21.772 *********** 2025-07-06 20:15:03.300420 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:15:03.300429 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:15:03.300438 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:15:03.300444 | orchestrator | 2025-07-06 20:15:03.300449 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-07-06 20:15:03.300455 | orchestrator | Sunday 06 July 2025 20:13:16 +0000 (0:00:03.068) 0:04:24.840 *********** 2025-07-06 20:15:03.300464 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-07-06 20:15:03.300473 | orchestrator | 2025-07-06 20:15:03.300483 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-07-06 20:15:03.300491 | orchestrator | Sunday 06 July 2025 20:13:17 +0000 (0:00:01.052) 0:04:25.892 *********** 2025-07-06 20:15:03.300500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-07-06 20:15:03.300506 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.300532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 
'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-07-06 20:15:03.300539 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:15:03.300544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-07-06 20:15:03.300555 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:15:03.300560 | orchestrator |
2025-07-06 20:15:03.300566 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] ***
2025-07-06 20:15:03.300572 | orchestrator | Sunday 06 July 2025 20:13:18 +0000 (0:00:01.023) 0:04:26.915 ***********
2025-07-06 20:15:03.300577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-07-06 20:15:03.300583 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:15:03.300589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-07-06 20:15:03.300594 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:15:03.300603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-07-06 20:15:03.300609 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:15:03.300615 | orchestrator |
2025-07-06 20:15:03.300621 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] ****
2025-07-06 20:15:03.300626 | orchestrator | Sunday 06 July 2025 20:13:20 +0000 (0:00:01.300) 0:04:28.216 ***********
2025-07-06 20:15:03.300631 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:15:03.300637 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:15:03.300642 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:15:03.300648 | orchestrator |
2025-07-06 20:15:03.300653 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2025-07-06 20:15:03.300659 | orchestrator | Sunday 06 July 2025 20:13:22 +0000 (0:00:01.761) 0:04:29.977 ***********
2025-07-06 20:15:03.300664 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:15:03.300670 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:15:03.300675 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:15:03.300681 | orchestrator |
2025-07-06 20:15:03.300686 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-07-06 20:15:03.300692 | orchestrator | Sunday 06 July 2025 20:13:24 +0000 (0:00:02.476) 0:04:32.454 ***********
2025-07-06 20:15:03.300697 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:15:03.300702 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:15:03.300708 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:15:03.300713 | orchestrator |
2025-07-06 20:15:03.300719 | orchestrator | TASK [include_role : octavia] **************************************************
2025-07-06 20:15:03.300724 | orchestrator | Sunday 06 July 2025 20:13:27 +0000 (0:00:02.889) 0:04:35.343 ***********
2025-07-06 20:15:03.300730 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-06 20:15:03.300742 | orchestrator |
2025-07-06 20:15:03.300752 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ********************
2025-07-06 20:15:03.300762 | orchestrator | Sunday 06 July 2025 20:13:28 +0000 (0:00:01.212) 0:04:36.555 ***********
2025-07-06 20:15:03.300797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-07-06 20:15:03.300804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-07-06 20:15:03.300810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-07-06 20:15:03.300820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-07-06 20:15:03.300826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-07-06 20:15:03.300837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-07-06 20:15:03.300859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-07-06 20:15:03.300865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-07-06 20:15:03.300871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-07-06 20:15:03.300881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-07-06 20:15:03.300887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-07-06 20:15:03.300893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-07-06 20:15:03.300920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-07-06 20:15:03.300927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-07-06 20:15:03.300933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-07-06 20:15:03.300939 | orchestrator |
2025-07-06 20:15:03.300944 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] ***
2025-07-06 20:15:03.300950 | orchestrator | Sunday 06 July 2025 20:13:32 +0000 (0:00:03.393) 0:04:39.948 ***********
2025-07-06 20:15:03.300959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-07-06 20:15:03.300965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-07-06 20:15:03.300975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-07-06 20:15:03.300997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-07-06 20:15:03.301003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-07-06 20:15:03.301009 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:15:03.301015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-07-06 20:15:03.301021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-07-06 20:15:03.301029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-07-06 20:15:03.301039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-07-06 20:15:03.301045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-07-06 20:15:03.301066 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:15:03.301073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-07-06 20:15:03.301079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-07-06 20:15:03.301084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-07-06 20:15:03.301093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-07-06 20:15:03.301105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-07-06 20:15:03.301111 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:15:03.301117 | orchestrator |
2025-07-06 20:15:03.301122 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] ***********************
2025-07-06 20:15:03.301128 | orchestrator | Sunday 06 July 2025 20:13:32 +0000 (0:00:00.732) 0:04:40.681 ***********
2025-07-06 20:15:03.301134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-07-06 20:15:03.301140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-07-06 20:15:03.301145 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:15:03.301167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-07-06 20:15:03.301173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-07-06 20:15:03.301179 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:15:03.301184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-07-06 20:15:03.301190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-07-06 20:15:03.301195 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:15:03.301200 | orchestrator |
2025-07-06 20:15:03.301206 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************
2025-07-06 20:15:03.301211 | orchestrator | Sunday 06 July 2025 20:13:33 +0000 (0:00:00.915) 0:04:41.597 ***********
2025-07-06 20:15:03.301217 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:15:03.301222 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:15:03.301227 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:15:03.301233 | orchestrator |
2025-07-06 20:15:03.301238 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************
2025-07-06 20:15:03.301258 | orchestrator | Sunday 06 July 2025 20:13:35 +0000 (0:00:01.788) 0:04:43.386 ***********
2025-07-06 20:15:03.301263 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:15:03.301269 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:15:03.301274 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:15:03.301280 | orchestrator |
2025-07-06 20:15:03.301285 | orchestrator | TASK [include_role : opensearch] ***********************************************
2025-07-06 20:15:03.301290 | orchestrator | Sunday 06 July 2025 20:13:37 +0000 (0:00:02.095) 0:04:45.482 ***********
2025-07-06 20:15:03.301296 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-06 20:15:03.301301 | orchestrator |
2025-07-06 20:15:03.301307 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] *****************
2025-07-06 20:15:03.301312 | orchestrator | Sunday 06 July 2025 20:13:38 +0000 (0:00:01.356) 0:04:46.838 ***********
2025-07-06 20:15:03.301326 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-07-06 20:15:03.301333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-07-06 20:15:03.301356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-07-06 20:15:03.301363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-07-06 20:15:03.301370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-07-06 20:15:03.301384 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-07-06 20:15:03.301390 | orchestrator |
2025-07-06 20:15:03.301396 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] ***
2025-07-06 20:15:03.301402 | orchestrator | Sunday 06 July 2025 20:13:44 +0000 (0:00:05.309) 0:04:52.147 ***********
2025-07-06 20:15:03.301423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-07-06 20:15:03.301431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-07-06 20:15:03.301436 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:15:03.301446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-07-06 20:15:03.301455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-07-06 20:15:03.301461 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:15:03.301483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-07-06 20:15:03.301490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes':
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-06 20:15:03.301496 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.301502 | orchestrator | 2025-07-06 20:15:03.301507 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-07-06 20:15:03.301517 | orchestrator | Sunday 06 July 2025 20:13:45 +0000 (0:00:01.028) 0:04:53.176 *********** 2025-07-06 20:15:03.301523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-07-06 20:15:03.301528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-07-06 20:15:03.301534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-07-06 20:15:03.301540 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.301545 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-07-06 20:15:03.301553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-07-06 20:15:03.301559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-07-06 20:15:03.301565 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.301570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-07-06 20:15:03.301576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-07-06 20:15:03.301582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-07-06 20:15:03.301587 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.301592 | orchestrator | 2025-07-06 20:15:03.301598 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-07-06 20:15:03.301603 | orchestrator | Sunday 06 July 2025 20:13:46 +0000 (0:00:00.891) 0:04:54.068 *********** 2025-07-06 
20:15:03.301609 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.301615 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.301620 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.301625 | orchestrator | 2025-07-06 20:15:03.301631 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-07-06 20:15:03.301636 | orchestrator | Sunday 06 July 2025 20:13:46 +0000 (0:00:00.429) 0:04:54.497 *********** 2025-07-06 20:15:03.301641 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.301647 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.301652 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.301657 | orchestrator | 2025-07-06 20:15:03.301678 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-07-06 20:15:03.301685 | orchestrator | Sunday 06 July 2025 20:13:47 +0000 (0:00:01.360) 0:04:55.858 *********** 2025-07-06 20:15:03.301691 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:15:03.301697 | orchestrator | 2025-07-06 20:15:03.301702 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-07-06 20:15:03.301711 | orchestrator | Sunday 06 July 2025 20:13:49 +0000 (0:00:01.648) 0:04:57.506 *********** 2025-07-06 20:15:03.301717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 
'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-07-06 20:15:03.301723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-06 20:15:03.301729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:15:03.301738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-07-06 20:15:03.301744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:15:03.301750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-06 20:15:03.301771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-06 20:15:03.301782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:15:03.301788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:15:03.301793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-06 20:15:03.301802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-07-06 20:15:03.301808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-06 20:15:03.301814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:15:03.301834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:15:03.301844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-06 20:15:03.301850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-07-06 20:15:03.301860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': 
{'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-07-06 20:15:03.301866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-07-06 20:15:03.301875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-07-06 20:15:03.301886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:15:03.301892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:15:03.301897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:15:03.301903 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:15:03.301912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-06 20:15:03.301917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-06 20:15:03.301928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 
'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-07-06 20:15:03.301939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-07-06 20:15:03.301945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:15:03.301950 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:15:03.301959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-06 20:15:03.301964 | orchestrator | 2025-07-06 20:15:03.301970 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-07-06 20:15:03.301975 | orchestrator | Sunday 06 July 2025 20:13:53 +0000 (0:00:04.099) 0:05:01.606 *********** 2025-07-06 20:15:03.301981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-07-06 20:15:03.301992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-06 20:15:03.302001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:15:03.302007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:15:03.302035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-06 20:15:03.302045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-07-06 20:15:03.302051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': 
False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-07-06 20:15:03.302062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:15:03.302074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:15:03.302080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-06 20:15:03.302085 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.302091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 
'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-07-06 20:15:03.302097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-06 20:15:03.302103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:15:03.302109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:15:03.302119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-06 20:15:03.302139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-07-06 20:15:03.302146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-07-06 20:15:03.302151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:15:03.302157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:15:03.302166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 
'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-07-06 20:15:03.302177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-06 20:15:03.302183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-06 20:15:03.302191 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.302197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 
'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:15:03.302202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:15:03.302208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-06 20:15:03.302216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-07-06 20:15:03.302228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-07-06 20:15:03.302233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2025-07-06 20:15:03.302242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-06 20:15:03.302285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-06 20:15:03.302291 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.302297 | orchestrator | 2025-07-06 20:15:03.302302 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-07-06 20:15:03.302308 | orchestrator | Sunday 06 July 2025 20:13:54 +0000 (0:00:01.210) 0:05:02.817 *********** 2025-07-06 20:15:03.302313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-07-06 20:15:03.302319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-07-06 20:15:03.302325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-07-06 20:15:03.302331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-07-06 20:15:03.302337 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.302342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-07-06 20:15:03.302356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-07-06 20:15:03.302362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-07-06 20:15:03.302368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-07-06 20:15:03.302374 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.302379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-07-06 20:15:03.302385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-07-06 20:15:03.302390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-07-06 20:15:03.302399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-07-06 20:15:03.302405 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.302410 | orchestrator | 2025-07-06 20:15:03.302416 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-07-06 20:15:03.302421 | orchestrator | Sunday 06 July 2025 20:13:55 +0000 (0:00:01.022) 0:05:03.839 *********** 2025-07-06 20:15:03.302427 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.302432 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.302438 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.302443 | orchestrator | 2025-07-06 20:15:03.302449 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-07-06 20:15:03.302454 | orchestrator | Sunday 06 July 2025 20:13:56 +0000 (0:00:00.486) 0:05:04.325 *********** 2025-07-06 20:15:03.302460 | orchestrator | skipping: 
[testbed-node-0] 2025-07-06 20:15:03.302465 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.302471 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.302476 | orchestrator | 2025-07-06 20:15:03.302481 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-07-06 20:15:03.302487 | orchestrator | Sunday 06 July 2025 20:13:58 +0000 (0:00:01.718) 0:05:06.044 *********** 2025-07-06 20:15:03.302492 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:15:03.302498 | orchestrator | 2025-07-06 20:15:03.302503 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-07-06 20:15:03.302508 | orchestrator | Sunday 06 July 2025 20:13:59 +0000 (0:00:01.720) 0:05:07.765 *********** 2025-07-06 20:15:03.302515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-06 20:15:03.302537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 
None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-06 20:15:03.302548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-06 20:15:03.302557 | orchestrator | 2025-07-06 20:15:03.302570 | orchestrator | TASK [haproxy-config : Add configuration for 
rabbitmq when using single external frontend] *** 2025-07-06 20:15:03.302579 | orchestrator | Sunday 06 July 2025 20:14:02 +0000 (0:00:02.599) 0:05:10.364 *********** 2025-07-06 20:15:03.302588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-07-06 20:15:03.302594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-07-06 20:15:03.302607 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.302612 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.302621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-07-06 20:15:03.302628 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.302633 | orchestrator | 2025-07-06 20:15:03.302638 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-07-06 20:15:03.302644 | orchestrator | Sunday 06 July 2025 20:14:02 +0000 (0:00:00.418) 0:05:10.783 *********** 2025-07-06 20:15:03.302650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-07-06 20:15:03.302655 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.302661 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-07-06 20:15:03.302666 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.302672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-07-06 20:15:03.302677 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.302682 | orchestrator | 2025-07-06 20:15:03.302688 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-07-06 20:15:03.302693 | orchestrator | Sunday 06 July 2025 20:14:03 +0000 (0:00:01.072) 0:05:11.855 *********** 2025-07-06 20:15:03.302704 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.302712 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.302721 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.302730 | orchestrator | 2025-07-06 20:15:03.302738 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-07-06 20:15:03.302747 | orchestrator | Sunday 06 July 2025 20:14:04 +0000 (0:00:00.455) 0:05:12.311 *********** 2025-07-06 20:15:03.302756 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:15:03.302765 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:15:03.302774 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:15:03.302788 | orchestrator | 2025-07-06 20:15:03.302796 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-07-06 20:15:03.302801 | orchestrator | Sunday 06 July 2025 20:14:05 +0000 (0:00:01.347) 0:05:13.659 *********** 2025-07-06 20:15:03.302806 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:15:03.302811 | orchestrator | 2025-07-06 20:15:03.302816 | orchestrator | 
TASK [haproxy-config : Copying over skyline haproxy config] ********************
2025-07-06 20:15:03.302820 | orchestrator | Sunday 06 July 2025 20:14:07 +0000 (0:00:01.820) 0:05:15.479 ***********
2025-07-06 20:15:03.302826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-07-06 20:15:03.302835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-07-06 20:15:03.302840 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-07-06 20:15:03.302849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-07-06 20:15:03.302860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-07-06 20:15:03.302869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-07-06 20:15:03.302877 | orchestrator |
2025-07-06 20:15:03.302885 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] ***
2025-07-06 20:15:03.302896 | orchestrator | Sunday 06 July 2025 20:14:13 +0000 (0:00:06.096) 0:05:21.576 ***********
2025-07-06 20:15:03.302906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-07-06 20:15:03.302934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-07-06 20:15:03.302949 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:15:03.302955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-07-06 20:15:03.302960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-07-06 20:15:03.302965 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:15:03.302973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-07-06 20:15:03.302979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-07-06 20:15:03.302987 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:15:03.302992 | orchestrator |
2025-07-06 20:15:03.302997 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] ***********************
2025-07-06 20:15:03.303005 | orchestrator | Sunday 06 July 2025 20:14:14 +0000 (0:00:00.614) 0:05:22.190 ***********
2025-07-06 20:15:03.303010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-07-06 20:15:03.303015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-07-06 20:15:03.303020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-07-06 20:15:03.303025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-07-06 20:15:03.303030 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:15:03.303035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-07-06 20:15:03.303040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-07-06 20:15:03.303045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-07-06 20:15:03.303050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-07-06 20:15:03.303055 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:15:03.303060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-07-06 20:15:03.303068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-07-06 20:15:03.303073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-07-06 20:15:03.303078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-07-06 20:15:03.303083 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:15:03.303088 | orchestrator |
2025-07-06 20:15:03.303093 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2025-07-06 20:15:03.303098 | orchestrator | Sunday 06 July 2025 20:14:15 +0000 (0:00:01.691) 0:05:23.881 ***********
2025-07-06 20:15:03.303102 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:15:03.303107 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:15:03.303116 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:15:03.303121 | orchestrator |
2025-07-06 20:15:03.303126 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2025-07-06 20:15:03.303131 | orchestrator | Sunday 06 July 2025 20:14:17 +0000 (0:00:01.349) 0:05:25.231 ***********
2025-07-06 20:15:03.303136 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:15:03.303141 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:15:03.303146 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:15:03.303150 | orchestrator |
2025-07-06 20:15:03.303155 | orchestrator | TASK [include_role : swift] ****************************************************
2025-07-06 20:15:03.303160 | orchestrator | Sunday 06 July 2025 20:14:19 +0000 (0:00:00.281) 0:05:27.127 ***********
2025-07-06 20:15:03.303165 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:15:03.303170 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:15:03.303174 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:15:03.303180 | orchestrator |
2025-07-06 20:15:03.303184 | orchestrator | TASK [include_role : tacker] ***************************************************
2025-07-06 20:15:03.303189 | orchestrator | Sunday 06 July 2025 20:14:19 +0000 (0:00:00.460) 0:05:27.409 ***********
2025-07-06 20:15:03.303194 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:15:03.303199 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:15:03.303204 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:15:03.303208 | orchestrator |
2025-07-06 20:15:03.303213 | orchestrator | TASK [include_role : trove] ****************************************************
2025-07-06 20:15:03.303221 | orchestrator | Sunday 06 July 2025 20:14:19 +0000 (0:00:00.460) 0:05:27.869 ***********
2025-07-06 20:15:03.303226 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:15:03.303231 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:15:03.303236 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:15:03.303240 | orchestrator |
2025-07-06 20:15:03.303261 | orchestrator | TASK [include_role : venus] ****************************************************
2025-07-06 20:15:03.303266 | orchestrator | Sunday 06 July 2025 20:14:20 +0000 (0:00:00.314) 0:05:28.184 ***********
2025-07-06 20:15:03.303271 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:15:03.303276 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:15:03.303281 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:15:03.303286 | orchestrator |
2025-07-06 20:15:03.303291 | orchestrator | TASK [include_role : watcher] **************************************************
2025-07-06 20:15:03.303295 | orchestrator | Sunday 06 July 2025 20:14:20 +0000 (0:00:00.327) 0:05:28.511 ***********
2025-07-06 20:15:03.303300 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:15:03.303305 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:15:03.303310 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:15:03.303315 | orchestrator |
2025-07-06 20:15:03.303320 | orchestrator | TASK [include_role : zun] ******************************************************
2025-07-06 20:15:03.303325 | orchestrator | Sunday 06 July 2025 20:14:20 +0000 (0:00:00.309) 0:05:28.821 ***********
2025-07-06 20:15:03.303329 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:15:03.303334 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:15:03.303339 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:15:03.303344 | orchestrator |
2025-07-06 20:15:03.303349 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2025-07-06 20:15:03.303354 | orchestrator | Sunday 06 July 2025 20:14:21 +0000 (0:00:00.827) 0:05:29.648 ***********
2025-07-06 20:15:03.303359 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:15:03.303364 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:15:03.303369 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:15:03.303373 | orchestrator |
2025-07-06 20:15:03.303378 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2025-07-06 20:15:03.303383 | orchestrator | Sunday 06 July 2025 20:14:22 +0000 (0:00:00.695) 0:05:30.343 ***********
2025-07-06 20:15:03.303388 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:15:03.303393 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:15:03.303398 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:15:03.303408 | orchestrator |
2025-07-06 20:15:03.303413 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2025-07-06 20:15:03.303418 | orchestrator | Sunday 06 July 2025 20:14:22 +0000 (0:00:00.333) 0:05:30.677 ***********
2025-07-06 20:15:03.303423 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:15:03.303428 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:15:03.303433 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:15:03.303437 | orchestrator |
2025-07-06 20:15:03.303442 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2025-07-06 20:15:03.303447 | orchestrator | Sunday 06 July 2025 20:14:23 +0000 (0:00:01.180) 0:05:31.857 ***********
2025-07-06 20:15:03.303452 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:15:03.303457 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:15:03.303462 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:15:03.303466 | orchestrator |
2025-07-06 20:15:03.303471 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2025-07-06 20:15:03.303476 | orchestrator | Sunday 06 July 2025 20:14:24 +0000 (0:00:00.946) 0:05:32.771 ***********
2025-07-06 20:15:03.303481 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:15:03.303486 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:15:03.303491 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:15:03.303496 | orchestrator |
2025-07-06 20:15:03.303504 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2025-07-06 20:15:03.303509 | orchestrator | Sunday 06 July 2025 20:14:25 +0000 (0:00:00.946) 0:05:33.717 ***********
2025-07-06 20:15:03.303514 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:15:03.303519 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:15:03.303524 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:15:03.303529 | orchestrator |
2025-07-06 20:15:03.303534 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2025-07-06 20:15:03.303538 | orchestrator | Sunday 06 July 2025 20:14:30 +0000 (0:00:04.616) 0:05:38.334 ***********
2025-07-06 20:15:03.303543 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:15:03.303548 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:15:03.303553 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:15:03.303558 | orchestrator |
2025-07-06 20:15:03.303562 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2025-07-06 20:15:03.303567 | orchestrator | Sunday 06 July 2025 20:14:33 +0000 (0:00:02.752) 0:05:41.087 ***********
2025-07-06 20:15:03.303572 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:15:03.303577 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:15:03.303584 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:15:03.303592 | orchestrator |
2025-07-06 20:15:03.303599 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2025-07-06 20:15:03.303604 | orchestrator | Sunday 06 July 2025 20:14:46 +0000 (0:00:12.996) 0:05:54.083 ***********
2025-07-06 20:15:03.303608 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:15:03.303613 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:15:03.303618 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:15:03.303622 | orchestrator |
2025-07-06 20:15:03.303627 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2025-07-06 20:15:03.303632 | orchestrator | Sunday 06 July 2025 20:14:46 +0000 (0:00:00.838) 0:05:54.922 ***********
2025-07-06 20:15:03.303637 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:15:03.303642 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:15:03.303646 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:15:03.303651 | orchestrator |
2025-07-06 20:15:03.303656 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2025-07-06 20:15:03.303661 | orchestrator | Sunday 06 July 2025 20:14:51 +0000 (0:00:04.521) 0:05:59.443 ***********
2025-07-06 20:15:03.303666 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:15:03.303670 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:15:03.303675 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:15:03.303680 | orchestrator |
2025-07-06 20:15:03.303685 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2025-07-06 20:15:03.303696 | orchestrator | Sunday 06 July 2025 20:14:51 +0000 (0:00:00.326) 0:05:59.769 ***********
2025-07-06 20:15:03.303701 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:15:03.303708 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:15:03.303714 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:15:03.303718 | orchestrator |
2025-07-06 20:15:03.303723 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2025-07-06 20:15:03.303728 | orchestrator | Sunday 06 July 2025 20:14:52 +0000 (0:00:00.692) 0:06:00.462 ***********
2025-07-06 20:15:03.303733 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:15:03.303738 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:15:03.303742 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:15:03.303747 | orchestrator |
2025-07-06 20:15:03.303752 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2025-07-06 20:15:03.303757 | orchestrator | Sunday 06 July 2025 20:14:52 +0000 (0:00:00.344) 0:06:00.806 ***********
2025-07-06 20:15:03.303762 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:15:03.303766 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:15:03.303771 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:15:03.303776 | orchestrator |
2025-07-06 20:15:03.303781 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2025-07-06 20:15:03.303785 | orchestrator | Sunday 06 July 2025 20:14:53 +0000 (0:00:00.348) 0:06:01.155 ***********
2025-07-06 20:15:03.303792 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:15:03.303800 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:15:03.303807 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:15:03.303814 | orchestrator |
2025-07-06 20:15:03.303820 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2025-07-06 20:15:03.303827 | orchestrator | Sunday 06 July 2025 20:14:53 +0000 (0:00:00.331) 0:06:01.486 ***********
2025-07-06 20:15:03.303834 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:15:03.303841 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:15:03.303849 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:15:03.303857 | orchestrator |
2025-07-06 20:15:03.303864 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2025-07-06 20:15:03.303872 | orchestrator | Sunday 06 July 2025 20:14:54 +0000 (0:00:00.709) 0:06:02.196 ***********
2025-07-06 20:15:03.303880 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:15:03.303888 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:15:03.303894 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:15:03.303902 | orchestrator |
2025-07-06 20:15:03.303909 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2025-07-06 20:15:03.303916 | orchestrator | Sunday 06 July 2025 20:14:59 +0000 (0:00:04.748) 0:06:06.944 ***********
2025-07-06 20:15:03.303924 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:15:03.303930 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:15:03.303938 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:15:03.303944 | orchestrator |
2025-07-06 20:15:03.303952 | orchestrator | PLAY RECAP *********************************************************************
2025-07-06 20:15:03.303959 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-07-06 20:15:03.303966 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-07-06 20:15:03.303974 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-07-06 20:15:03.303980 | orchestrator |
2025-07-06 20:15:03.303987 | orchestrator |
2025-07-06 20:15:03.303998 | orchestrator | TASKS RECAP ********************************************************************
2025-07-06 20:15:03.304006 | orchestrator | Sunday 06 July 2025 20:14:59 +0000 (0:00:00.795) 0:06:07.740 ***********
2025-07-06 20:15:03.304013 | orchestrator | ===============================================================================
2025-07-06 20:15:03.304030 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 13.00s
2025-07-06 20:15:03.304039 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.10s
2025-07-06 20:15:03.304047 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 5.48s
2025-07-06 20:15:03.304055 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.31s
2025-07-06 20:15:03.304062 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 4.75s
2025-07-06 20:15:03.304067 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 4.62s
2025-07-06 20:15:03.304072 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.54s
2025-07-06 20:15:03.304077 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.54s
2025-07-06 20:15:03.304082 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.52s
2025-07-06 20:15:03.304086 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.45s
2025-07-06 20:15:03.304091 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 4.36s
2025-07-06 20:15:03.304096 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.21s
2025-07-06 20:15:03.304101 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.15s
2025-07-06 20:15:03.304106 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.10s
2025-07-06 20:15:03.304111 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.95s
2025-07-06 20:15:03.304115 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 3.94s
2025-07-06 20:15:03.304120 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 3.81s
2025-07-06 20:15:03.304125 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 3.80s
2025-07-06 20:15:03.304130 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 3.78s
2025-07-06 20:15:03.304135 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 3.49s
2025-07-06 20:15:03.304144 | orchestrator | 2025-07-06 20:15:03 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:15:06.320593 | orchestrator | 2025-07-06 20:15:06 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED
2025-07-06 20:15:06.320883 | orchestrator | 2025-07-06 20:15:06 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED
2025-07-06 20:15:06.324621 | orchestrator | 2025-07-06 20:15:06 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:15:06.324654 | orchestrator | 2025-07-06 20:15:06 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:15:09.368080 | orchestrator | 2025-07-06 20:15:09 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED
2025-07-06 20:15:09.368790 | orchestrator | 2025-07-06 20:15:09 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED
2025-07-06 20:15:09.370480 | orchestrator | 2025-07-06 20:15:09 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:15:09.370516 | orchestrator | 2025-07-06 20:15:09 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:15:12.414625 | orchestrator | 2025-07-06 20:15:12 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED
2025-07-06 20:15:12.415054 | orchestrator | 2025-07-06 20:15:12 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED
2025-07-06 20:15:12.417524 | orchestrator | 2025-07-06 20:15:12 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:15:12.417556 | orchestrator | 2025-07-06 20:15:12 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:15:15.457072 | orchestrator | 2025-07-06 20:15:15 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED
2025-07-06 20:15:15.457689 | orchestrator | 2025-07-06 20:15:15 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED
2025-07-06 20:15:15.458358 | orchestrator | 2025-07-06 20:15:15 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:15:15.458629 | orchestrator | 2025-07-06 20:15:15 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:15:18.492713 | orchestrator | 2025-07-06 20:15:18 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED
2025-07-06 20:15:18.494521 | orchestrator | 2025-07-06 20:15:18 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED
2025-07-06 20:15:18.495969 | orchestrator | 2025-07-06 20:15:18 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:15:18.496028 | orchestrator | 2025-07-06 20:15:18 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:15:21.519925 | orchestrator | 2025-07-06 20:15:21 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED
2025-07-06 20:15:21.520045 | orchestrator | 2025-07-06 20:15:21 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED
2025-07-06 20:15:21.520746 | orchestrator | 2025-07-06 20:15:21 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:15:21.520773 | orchestrator | 2025-07-06 20:15:21 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:15:24.571290 | orchestrator | 2025-07-06 20:15:24 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED
2025-07-06 20:15:24.571516 | orchestrator | 2025-07-06 20:15:24 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED
2025-07-06 20:15:24.572174 | orchestrator | 2025-07-06 20:15:24 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:15:24.572209 | orchestrator | 2025-07-06 20:15:24 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:15:27.609082 | orchestrator | 2025-07-06 20:15:27 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED
2025-07-06 20:15:27.613033 | orchestrator | 2025-07-06 20:15:27 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED
2025-07-06 20:15:27.613089 | orchestrator | 2025-07-06 20:15:27 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:15:27.613102 | orchestrator | 2025-07-06 20:15:27 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:15:30.644017 | orchestrator | 2025-07-06 20:15:30 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED
2025-07-06 20:15:30.648658 | orchestrator | 2025-07-06 20:15:30 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED
2025-07-06 20:15:30.649708 | orchestrator | 2025-07-06 20:15:30 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:15:30.650946 | orchestrator | 2025-07-06 20:15:30 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:15:33.692587 | orchestrator | 2025-07-06 20:15:33 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED
2025-07-06 20:15:33.697251 | orchestrator | 2025-07-06 20:15:33 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED
2025-07-06 20:15:33.697847 | orchestrator | 2025-07-06 20:15:33 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:15:33.697884 | orchestrator | 2025-07-06 20:15:33 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:15:36.751350 | orchestrator | 2025-07-06 20:15:36 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED
2025-07-06 20:15:36.754692 | orchestrator | 2025-07-06 20:15:36 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED
2025-07-06 20:15:36.756820 | orchestrator | 2025-07-06 20:15:36 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:15:36.756916 | orchestrator | 2025-07-06 20:15:36 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:15:39.791780 | orchestrator | 2025-07-06 20:15:39 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED
2025-07-06 20:15:39.791907 | orchestrator | 2025-07-06 20:15:39 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED
2025-07-06 20:15:39.792669 | orchestrator | 2025-07-06 20:15:39 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:15:39.792706 | orchestrator | 2025-07-06 20:15:39 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:15:42.831567 | orchestrator | 2025-07-06 20:15:42 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED
2025-07-06 20:15:42.833808 | orchestrator | 2025-07-06 20:15:42 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED
2025-07-06 20:15:42.835993 | orchestrator | 2025-07-06 20:15:42 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:15:42.836198 | orchestrator | 2025-07-06 20:15:42 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:15:45.883259 | orchestrator | 2025-07-06 20:15:45 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED
2025-07-06 20:15:45.883863 | orchestrator | 2025-07-06 20:15:45 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED
2025-07-06 20:15:45.886157 | orchestrator | 2025-07-06 20:15:45 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:15:45.886315 | orchestrator | 2025-07-06 20:15:45 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:15:48.938336 | orchestrator | 2025-07-06 20:15:48 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED
2025-07-06 20:15:48.939456 | orchestrator | 2025-07-06 20:15:48 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED
2025-07-06 20:15:48.941517 | orchestrator | 2025-07-06 20:15:48 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:15:48.941701 | orchestrator | 2025-07-06 20:15:48 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:15:51.985168 | orchestrator | 2025-07-06 20:15:51 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED
2025-07-06 20:15:51.986123 | orchestrator | 2025-07-06 20:15:51 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED
2025-07-06 20:15:51.987539 | orchestrator | 2025-07-06 20:15:51 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:15:51.987818 | orchestrator | 2025-07-06 20:15:51 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:15:55.040001 | orchestrator | 2025-07-06 20:15:55 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED
2025-07-06 20:15:55.041734 | orchestrator | 2025-07-06 20:15:55 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED
2025-07-06 20:15:55.043354 | orchestrator | 2025-07-06 20:15:55 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:15:55.043398 | orchestrator | 2025-07-06 20:15:55 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:15:58.086201 | orchestrator | 2025-07-06 20:15:58 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED
2025-07-06 20:15:58.087682 | orchestrator | 2025-07-06 20:15:58 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED
2025-07-06 20:15:58.088983 | orchestrator | 2025-07-06 20:15:58 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED
2025-07-06 20:15:58.089020 | orchestrator | 2025-07-06 20:15:58 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:16:01.145421 | orchestrator | 2025-07-06 20:16:01 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED
2025-07-06 20:16:01.146766 | orchestrator | 2025-07-06 20:16:01 | INFO  | Task
67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED 2025-07-06 20:16:01.148479 | orchestrator | 2025-07-06 20:16:01 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:16:01.148517 | orchestrator | 2025-07-06 20:16:01 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:16:04.197170 | orchestrator | 2025-07-06 20:16:04 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED 2025-07-06 20:16:04.199355 | orchestrator | 2025-07-06 20:16:04 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED 2025-07-06 20:16:04.201521 | orchestrator | 2025-07-06 20:16:04 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:16:04.201558 | orchestrator | 2025-07-06 20:16:04 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:16:07.245719 | orchestrator | 2025-07-06 20:16:07 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED 2025-07-06 20:16:07.249481 | orchestrator | 2025-07-06 20:16:07 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED 2025-07-06 20:16:07.251910 | orchestrator | 2025-07-06 20:16:07 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:16:07.252064 | orchestrator | 2025-07-06 20:16:07 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:16:10.297769 | orchestrator | 2025-07-06 20:16:10 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED 2025-07-06 20:16:10.299692 | orchestrator | 2025-07-06 20:16:10 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED 2025-07-06 20:16:10.304643 | orchestrator | 2025-07-06 20:16:10 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:16:10.304694 | orchestrator | 2025-07-06 20:16:10 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:16:13.350823 | orchestrator | 2025-07-06 20:16:13 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state 
STARTED 2025-07-06 20:16:13.350931 | orchestrator | 2025-07-06 20:16:13 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED 2025-07-06 20:16:13.351061 | orchestrator | 2025-07-06 20:16:13 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:16:13.351341 | orchestrator | 2025-07-06 20:16:13 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:16:16.395619 | orchestrator | 2025-07-06 20:16:16 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED 2025-07-06 20:16:16.397046 | orchestrator | 2025-07-06 20:16:16 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED 2025-07-06 20:16:16.399134 | orchestrator | 2025-07-06 20:16:16 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:16:16.399388 | orchestrator | 2025-07-06 20:16:16 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:16:19.453978 | orchestrator | 2025-07-06 20:16:19 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED 2025-07-06 20:16:19.455050 | orchestrator | 2025-07-06 20:16:19 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED 2025-07-06 20:16:19.456702 | orchestrator | 2025-07-06 20:16:19 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:16:19.457261 | orchestrator | 2025-07-06 20:16:19 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:16:22.505574 | orchestrator | 2025-07-06 20:16:22 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED 2025-07-06 20:16:22.507056 | orchestrator | 2025-07-06 20:16:22 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED 2025-07-06 20:16:22.509179 | orchestrator | 2025-07-06 20:16:22 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:16:22.509979 | orchestrator | 2025-07-06 20:16:22 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:16:25.551672 | orchestrator | 
2025-07-06 20:16:25 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED 2025-07-06 20:16:25.552874 | orchestrator | 2025-07-06 20:16:25 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED 2025-07-06 20:16:25.554087 | orchestrator | 2025-07-06 20:16:25 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:16:25.554109 | orchestrator | 2025-07-06 20:16:25 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:16:28.596132 | orchestrator | 2025-07-06 20:16:28 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED 2025-07-06 20:16:28.596333 | orchestrator | 2025-07-06 20:16:28 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED 2025-07-06 20:16:28.596365 | orchestrator | 2025-07-06 20:16:28 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:16:28.596386 | orchestrator | 2025-07-06 20:16:28 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:16:31.628149 | orchestrator | 2025-07-06 20:16:31 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED 2025-07-06 20:16:31.628481 | orchestrator | 2025-07-06 20:16:31 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED 2025-07-06 20:16:31.628921 | orchestrator | 2025-07-06 20:16:31 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:16:31.628950 | orchestrator | 2025-07-06 20:16:31 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:16:34.667297 | orchestrator | 2025-07-06 20:16:34 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED 2025-07-06 20:16:34.668882 | orchestrator | 2025-07-06 20:16:34 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED 2025-07-06 20:16:34.670735 | orchestrator | 2025-07-06 20:16:34 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:16:34.670835 | orchestrator | 2025-07-06 20:16:34 | INFO  | 
Wait 1 second(s) until the next check 2025-07-06 20:16:37.715173 | orchestrator | 2025-07-06 20:16:37 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED 2025-07-06 20:16:37.716503 | orchestrator | 2025-07-06 20:16:37 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED 2025-07-06 20:16:37.717958 | orchestrator | 2025-07-06 20:16:37 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:16:37.717984 | orchestrator | 2025-07-06 20:16:37 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:16:40.758713 | orchestrator | 2025-07-06 20:16:40 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED 2025-07-06 20:16:40.760155 | orchestrator | 2025-07-06 20:16:40 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED 2025-07-06 20:16:40.762223 | orchestrator | 2025-07-06 20:16:40 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:16:40.762339 | orchestrator | 2025-07-06 20:16:40 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:16:43.804276 | orchestrator | 2025-07-06 20:16:43 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED 2025-07-06 20:16:43.804893 | orchestrator | 2025-07-06 20:16:43 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED 2025-07-06 20:16:43.805490 | orchestrator | 2025-07-06 20:16:43 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:16:43.805521 | orchestrator | 2025-07-06 20:16:43 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:16:46.848953 | orchestrator | 2025-07-06 20:16:46 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED 2025-07-06 20:16:46.851425 | orchestrator | 2025-07-06 20:16:46 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED 2025-07-06 20:16:46.853408 | orchestrator | 2025-07-06 20:16:46 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state 
STARTED 2025-07-06 20:16:46.853448 | orchestrator | 2025-07-06 20:16:46 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:16:49.898105 | orchestrator | 2025-07-06 20:16:49 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED 2025-07-06 20:16:49.900454 | orchestrator | 2025-07-06 20:16:49 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED 2025-07-06 20:16:49.902139 | orchestrator | 2025-07-06 20:16:49 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:16:49.902208 | orchestrator | 2025-07-06 20:16:49 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:16:52.947304 | orchestrator | 2025-07-06 20:16:52 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED 2025-07-06 20:16:52.948452 | orchestrator | 2025-07-06 20:16:52 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED 2025-07-06 20:16:52.949890 | orchestrator | 2025-07-06 20:16:52 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:16:52.949936 | orchestrator | 2025-07-06 20:16:52 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:16:55.988663 | orchestrator | 2025-07-06 20:16:55 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED 2025-07-06 20:16:55.989713 | orchestrator | 2025-07-06 20:16:55 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED 2025-07-06 20:16:55.991369 | orchestrator | 2025-07-06 20:16:55 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:16:55.991438 | orchestrator | 2025-07-06 20:16:55 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:16:59.040717 | orchestrator | 2025-07-06 20:16:59 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED 2025-07-06 20:16:59.042704 | orchestrator | 2025-07-06 20:16:59 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED 2025-07-06 20:16:59.044289 | orchestrator | 
2025-07-06 20:16:59 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:16:59.044685 | orchestrator | 2025-07-06 20:16:59 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:17:02.082169 | orchestrator | 2025-07-06 20:17:02 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED 2025-07-06 20:17:02.083603 | orchestrator | 2025-07-06 20:17:02 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED 2025-07-06 20:17:02.085381 | orchestrator | 2025-07-06 20:17:02 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:17:02.085652 | orchestrator | 2025-07-06 20:17:02 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:17:05.140799 | orchestrator | 2025-07-06 20:17:05 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED 2025-07-06 20:17:05.143537 | orchestrator | 2025-07-06 20:17:05 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED 2025-07-06 20:17:05.144715 | orchestrator | 2025-07-06 20:17:05 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:17:05.144767 | orchestrator | 2025-07-06 20:17:05 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:17:08.181084 | orchestrator | 2025-07-06 20:17:08 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED 2025-07-06 20:17:08.181260 | orchestrator | 2025-07-06 20:17:08 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED 2025-07-06 20:17:08.182657 | orchestrator | 2025-07-06 20:17:08 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:17:08.182750 | orchestrator | 2025-07-06 20:17:08 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:17:11.223980 | orchestrator | 2025-07-06 20:17:11 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED 2025-07-06 20:17:11.225388 | orchestrator | 2025-07-06 20:17:11 | INFO  | Task 
67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED 2025-07-06 20:17:11.227043 | orchestrator | 2025-07-06 20:17:11 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state STARTED 2025-07-06 20:17:11.227088 | orchestrator | 2025-07-06 20:17:11 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:17:14.279610 | orchestrator | 2025-07-06 20:17:14 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED 2025-07-06 20:17:14.281142 | orchestrator | 2025-07-06 20:17:14 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED 2025-07-06 20:17:14.282816 | orchestrator | 2025-07-06 20:17:14 | INFO  | Task 5d85f3c0-ac7a-408b-b935-f760d359d094 is in state STARTED 2025-07-06 20:17:14.290773 | orchestrator | 2025-07-06 20:17:14 | INFO  | Task 1ad4a8c2-eb84-47f5-bd79-3e6556e0264a is in state SUCCESS 2025-07-06 20:17:14.292241 | orchestrator | 2025-07-06 20:17:14.292282 | orchestrator | 2025-07-06 20:17:14.292298 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-07-06 20:17:14.292314 | orchestrator | 2025-07-06 20:17:14.292330 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-07-06 20:17:14.292346 | orchestrator | Sunday 06 July 2025 20:06:07 +0000 (0:00:00.696) 0:00:00.696 *********** 2025-07-06 20:17:14.292363 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:17:14.292377 | orchestrator | 2025-07-06 20:17:14.292386 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-07-06 20:17:14.292394 | orchestrator | Sunday 06 July 2025 20:06:08 +0000 (0:00:01.167) 0:00:01.863 *********** 2025-07-06 20:17:14.292403 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:17:14.292413 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:17:14.292421 | 
orchestrator | ok: [testbed-node-5]
2025-07-06 20:17:14.292430 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:17:14.292438 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:17:14.292446 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:17:14.292455 | orchestrator |
2025-07-06 20:17:14.292463 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-07-06 20:17:14.292472 | orchestrator | Sunday 06 July 2025 20:06:10 +0000 (0:00:00.793) 0:00:03.487 ***********
2025-07-06 20:17:14.292480 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:17:14.292489 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:17:14.292544 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:17:14.292592 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:17:14.292652 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:17:14.292661 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:17:14.292670 | orchestrator |
2025-07-06 20:17:14.292678 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-07-06 20:17:14.292687 | orchestrator | Sunday 06 July 2025 20:06:10 +0000 (0:00:00.893) 0:00:04.281 ***********
2025-07-06 20:17:14.292696 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:17:14.292705 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:17:14.292714 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:17:14.292722 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:17:14.292731 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:17:14.292745 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:17:14.292760 | orchestrator |
2025-07-06 20:17:14.292775 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-07-06 20:17:14.292790 | orchestrator | Sunday 06 July 2025 20:06:11 +0000 (0:00:00.663) 0:00:05.175 ***********
2025-07-06 20:17:14.292803 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:17:14.292816 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:17:14.292831 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:17:14.292845 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:17:14.292859 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:17:14.292873 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:17:14.292888 | orchestrator |
2025-07-06 20:17:14.292902 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-07-06 20:17:14.292918 | orchestrator | Sunday 06 July 2025 20:06:12 +0000 (0:00:00.663) 0:00:05.838 ***********
2025-07-06 20:17:14.292932 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:17:14.292947 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:17:14.292962 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:17:14.292977 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:17:14.293161 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:17:14.293205 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:17:14.293220 | orchestrator |
2025-07-06 20:17:14.293308 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-07-06 20:17:14.293397 | orchestrator | Sunday 06 July 2025 20:06:13 +0000 (0:00:00.606) 0:00:06.445 ***********
2025-07-06 20:17:14.293416 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:17:14.293431 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:17:14.293446 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:17:14.293456 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:17:14.293465 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:17:14.293474 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:17:14.293668 | orchestrator |
2025-07-06 20:17:14.293724 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-07-06 20:17:14.293741 | orchestrator | Sunday 06 July 2025 20:06:14 +0000 (0:00:01.134) 0:00:07.579 ***********
2025-07-06 20:17:14.293756 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.293804 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:17:14.293873 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:17:14.293888 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.293901 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.293910 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.293919 | orchestrator |
2025-07-06 20:17:14.293971 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-07-06 20:17:14.293996 | orchestrator | Sunday 06 July 2025 20:06:15 +0000 (0:00:00.978) 0:00:08.558 ***********
2025-07-06 20:17:14.294005 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:17:14.294014 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:17:14.294066 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:17:14.294075 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:17:14.294083 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:17:14.294092 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:17:14.294101 | orchestrator |
2025-07-06 20:17:14.294109 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-07-06 20:17:14.294119 | orchestrator | Sunday 06 July 2025 20:06:16 +0000 (0:00:01.043) 0:00:09.601 ***********
2025-07-06 20:17:14.294142 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-07-06 20:17:14.294151 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-07-06 20:17:14.294159 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-07-06 20:17:14.294202 | orchestrator |
2025-07-06 20:17:14.294212 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-07-06 20:17:14.294220 | orchestrator | Sunday 06 July 2025 20:06:17 +0000 (0:00:00.860) 0:00:10.461 ***********
2025-07-06 20:17:14.294229 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:17:14.294238 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:17:14.294246 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:17:14.294255 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:17:14.294263 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:17:14.294372 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:17:14.294382 | orchestrator |
2025-07-06 20:17:14.294408 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-07-06 20:17:14.294418 | orchestrator | Sunday 06 July 2025 20:06:18 +0000 (0:00:01.115) 0:00:11.576 ***********
2025-07-06 20:17:14.294426 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-07-06 20:17:14.294435 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-07-06 20:17:14.294444 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-07-06 20:17:14.294452 | orchestrator |
2025-07-06 20:17:14.294461 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-07-06 20:17:14.294469 | orchestrator | Sunday 06 July 2025 20:06:21 +0000 (0:00:03.463) 0:00:15.040 ***********
2025-07-06 20:17:14.294478 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-07-06 20:17:14.294487 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-07-06 20:17:14.294496 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-07-06 20:17:14.294560 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.294571 | orchestrator |
2025-07-06 20:17:14.294580 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-07-06 20:17:14.294589 | orchestrator | Sunday 06 July 2025 20:06:22 +0000 (0:00:00.593) 0:00:15.634 ***********
2025-07-06 20:17:14.294600 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-07-06 20:17:14.294611 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-07-06 20:17:14.294620 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-07-06 20:17:14.294630 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.294638 | orchestrator |
2025-07-06 20:17:14.294647 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-07-06 20:17:14.294759 | orchestrator | Sunday 06 July 2025 20:06:23 +0000 (0:00:00.870) 0:00:16.505 ***********
2025-07-06 20:17:14.294771 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-07-06 20:17:14.294782 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-07-06 20:17:14.294806 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-07-06 20:17:14.294816 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.294825 | orchestrator |
2025-07-06 20:17:14.294833 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-07-06 20:17:14.294842 | orchestrator | Sunday 06 July 2025 20:06:23 +0000 (0:00:00.602) 0:00:17.107 ***********
2025-07-06 20:17:14.294859 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-07-06 20:06:18.919515', 'end': '2025-07-06 20:06:19.204652', 'delta': '0:00:00.285137', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-07-06 20:17:14.294871 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-07-06 20:06:20.239931', 'end': '2025-07-06 20:06:20.530728', 'delta': '0:00:00.290797', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-07-06 20:17:14.294881 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-07-06 20:06:21.193004', 'end': '2025-07-06 20:06:21.479375', 'delta': '0:00:00.286371', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-07-06 20:17:14.294890 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.294899 | orchestrator |
2025-07-06 20:17:14.294907 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-07-06 20:17:14.294916 | orchestrator | Sunday 06 July 2025 20:06:24 +0000 (0:00:00.261) 0:00:17.369 ***********
2025-07-06 20:17:14.294924 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:17:14.294933 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:17:14.294942 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:17:14.294950 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:17:14.294958 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:17:14.294967 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:17:14.294981 | orchestrator |
2025-07-06 20:17:14.294990 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-07-06 20:17:14.294998 | orchestrator | Sunday 06 July 2025 20:06:26 +0000 (0:00:02.626) 0:00:19.995 ***********
2025-07-06 20:17:14.295007 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-07-06 20:17:14.295015 | orchestrator |
2025-07-06 20:17:14.295024 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-07-06 20:17:14.295033 | orchestrator | Sunday 06 July 2025 20:06:27 +0000 (0:00:00.882) 0:00:20.877 ***********
2025-07-06 20:17:14.295041 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.295050 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:17:14.295058 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:17:14.295067 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.295076 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.295084 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.295092 | orchestrator |
2025-07-06 20:17:14.295101 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-07-06 20:17:14.295110 | orchestrator | Sunday 06 July 2025 20:06:28 +0000 (0:00:01.345) 0:00:22.222 ***********
2025-07-06 20:17:14.295118 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.295127 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:17:14.295135 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:17:14.295144 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.295152 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.295160 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.295221 | orchestrator |
2025-07-06 20:17:14.295231 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-07-06 20:17:14.295240 | orchestrator | Sunday 06 July 2025 20:06:30 +0000 (0:00:01.359) 0:00:23.583 ***********
2025-07-06 20:17:14.295253 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.295262 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:17:14.295270 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:17:14.295278 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.295287 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.295295 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.295304 | orchestrator |
2025-07-06 20:17:14.295312 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-07-06 20:17:14.295321 | orchestrator | Sunday 06 July 2025 20:06:31 +0000 (0:00:00.867) 0:00:24.450 ***********
2025-07-06 20:17:14.295329 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.295337 | orchestrator |
2025-07-06 20:17:14.295346 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-07-06 20:17:14.295355 | orchestrator | Sunday 06 July 2025 20:06:31 +0000 (0:00:00.169) 0:00:24.620 ***********
2025-07-06 20:17:14.295363 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.295371 | orchestrator |
2025-07-06 20:17:14.295380 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-07-06 20:17:14.295389 | orchestrator | Sunday 06 July 2025 20:06:31 +0000 (0:00:00.203) 0:00:24.824 ***********
2025-07-06 20:17:14.295397 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.295406 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:17:14.295414 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:17:14.295423 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.295431 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.295439 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.295448 | orchestrator |
2025-07-06 20:17:14.295462 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-07-06 20:17:14.295471 | orchestrator | Sunday 06 July 2025 20:06:32 +0000 (0:00:00.697) 0:00:25.521 ***********
2025-07-06 20:17:14.295479 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.295488 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:17:14.295496 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:17:14.295505 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.295519 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.295528 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.295536 | orchestrator |
2025-07-06 20:17:14.295545 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-07-06 20:17:14.295553 | orchestrator | Sunday 06 July 2025 20:06:33 +0000 (0:00:01.516) 0:00:27.038 ***********
2025-07-06 20:17:14.295562 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.295571 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:17:14.295579 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:17:14.295588 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.295596 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.295604 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.295613 | orchestrator |
2025-07-06 20:17:14.295621 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-07-06 20:17:14.295630 | orchestrator | Sunday 06 July 2025 20:06:34 +0000 (0:00:00.629) 0:00:27.668 ***********
2025-07-06 20:17:14.295639 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.295647 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:17:14.295656 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:17:14.295664 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.295672 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.295681
| orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.295689 | orchestrator | 2025-07-06 20:17:14.295698 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-07-06 20:17:14.295707 | orchestrator | Sunday 06 July 2025 20:06:35 +0000 (0:00:00.881) 0:00:28.549 *********** 2025-07-06 20:17:14.295715 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.295723 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.295732 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.295740 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.295749 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.295757 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.295766 | orchestrator | 2025-07-06 20:17:14.295774 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-07-06 20:17:14.295783 | orchestrator | Sunday 06 July 2025 20:06:35 +0000 (0:00:00.500) 0:00:29.049 *********** 2025-07-06 20:17:14.295791 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.295800 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.295808 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.295820 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.295829 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.295837 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.295846 | orchestrator | 2025-07-06 20:17:14.295855 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-07-06 20:17:14.295863 | orchestrator | Sunday 06 July 2025 20:06:36 +0000 (0:00:00.775) 0:00:29.825 *********** 2025-07-06 20:17:14.295872 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.295880 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.295889 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.295898 | 
orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.295906 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.295915 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.295923 | orchestrator | 2025-07-06 20:17:14.295932 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-07-06 20:17:14.295940 | orchestrator | Sunday 06 July 2025 20:06:37 +0000 (0:00:00.657) 0:00:30.483 *********** 2025-07-06 20:17:14.295955 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--22d6bcb2--409c--5bf5--80b4--f4dcfc8f2a09-osd--block--22d6bcb2--409c--5bf5--80b4--f4dcfc8f2a09', 'dm-uuid-LVM-rjbWF69KjZA1lciHg9IvVSUsIBY4Kg80WIL8NwVJDt2W0vvy1hn52SYPacAZrYqR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-06 20:17:14.295971 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1256d0fb--e60f--50ff--afd8--4edc5f2c0a15-osd--block--1256d0fb--e60f--50ff--afd8--4edc5f2c0a15', 'dm-uuid-LVM-CBB30BSMi7D1675QBE6Kop3W0221LIf87NC6xDU42NdnRR273XaCkk7Ufim7E7AZ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-06 20:17:14.295987 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:17:14.295997 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:17:14.296007 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:17:14.296016 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:17:14.296024 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:17:14.296033 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:17:14.296042 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:17:14.296051 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:17:14.296091 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8571afc0-e036-46a5-988a-49c98e90c838', 'scsi-SQEMU_QEMU_HARDDISK_8571afc0-e036-46a5-988a-49c98e90c838'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8571afc0-e036-46a5-988a-49c98e90c838-part1', 'scsi-SQEMU_QEMU_HARDDISK_8571afc0-e036-46a5-988a-49c98e90c838-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8571afc0-e036-46a5-988a-49c98e90c838-part14', 'scsi-SQEMU_QEMU_HARDDISK_8571afc0-e036-46a5-988a-49c98e90c838-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8571afc0-e036-46a5-988a-49c98e90c838-part15', 'scsi-SQEMU_QEMU_HARDDISK_8571afc0-e036-46a5-988a-49c98e90c838-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8571afc0-e036-46a5-988a-49c98e90c838-part16', 'scsi-SQEMU_QEMU_HARDDISK_8571afc0-e036-46a5-988a-49c98e90c838-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:17:14.296112 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--22d6bcb2--409c--5bf5--80b4--f4dcfc8f2a09-osd--block--22d6bcb2--409c--5bf5--80b4--f4dcfc8f2a09'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-QndfkN-PmDn-892W-SloC-8ojV-i8Ey-uDNKwa', 'scsi-0QEMU_QEMU_HARDDISK_3c29cd91-58e9-42ce-8653-990321e9d76b', 'scsi-SQEMU_QEMU_HARDDISK_3c29cd91-58e9-42ce-8653-990321e9d76b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:17:14.296129 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1256d0fb--e60f--50ff--afd8--4edc5f2c0a15-osd--block--1256d0fb--e60f--50ff--afd8--4edc5f2c0a15'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-VC67cJ-cfHh-yd2t-xcB2-EPLx-jHbU-PYUcy2', 'scsi-0QEMU_QEMU_HARDDISK_fd99b70f-8aa3-4e15-8e66-07a34fe10111', 'scsi-SQEMU_QEMU_HARDDISK_fd99b70f-8aa3-4e15-8e66-07a34fe10111'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:17:14.296159 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--31ad454b--c5b7--54ad--acab--5839a456146b-osd--block--31ad454b--c5b7--54ad--acab--5839a456146b', 'dm-uuid-LVM-uunT5FMuh4bQub73Mz82ISkwuGVkewOLiWo1mOL02qkKQjgxsiMP7ETVSaq2tpWH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-06 20:17:14.296194 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2eb0e424--9f58--550c--b8cf--76c1b52e517a-osd--block--2eb0e424--9f58--550c--b8cf--76c1b52e517a', 'dm-uuid-LVM-PUe3Aihj8e3x89rT30vRYRaGSeZDlm0iypVWDZzCyZEN8aOrGAcRuQeVn3b2BvIO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-06 20:17:14.296219 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red 
Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c9a7d91-c8fc-48f8-acad-853231e255dd', 'scsi-SQEMU_QEMU_HARDDISK_0c9a7d91-c8fc-48f8-acad-853231e255dd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:17:14.296230 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:17:14.296239 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:17:14.296248 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:17:14.296257 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE 
interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-06-19-22-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:17:14.296266 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:17:14.296288 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:17:14.296301 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:17:14.296310 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:17:14.296325 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:17:14.296336 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c4ee01e1-6308-4ea1-8b72-f8c7aa0af3e8', 'scsi-SQEMU_QEMU_HARDDISK_c4ee01e1-6308-4ea1-8b72-f8c7aa0af3e8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c4ee01e1-6308-4ea1-8b72-f8c7aa0af3e8-part1', 'scsi-SQEMU_QEMU_HARDDISK_c4ee01e1-6308-4ea1-8b72-f8c7aa0af3e8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c4ee01e1-6308-4ea1-8b72-f8c7aa0af3e8-part14', 'scsi-SQEMU_QEMU_HARDDISK_c4ee01e1-6308-4ea1-8b72-f8c7aa0af3e8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_c4ee01e1-6308-4ea1-8b72-f8c7aa0af3e8-part15', 'scsi-SQEMU_QEMU_HARDDISK_c4ee01e1-6308-4ea1-8b72-f8c7aa0af3e8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c4ee01e1-6308-4ea1-8b72-f8c7aa0af3e8-part16', 'scsi-SQEMU_QEMU_HARDDISK_c4ee01e1-6308-4ea1-8b72-f8c7aa0af3e8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:17:14.296352 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--31ad454b--c5b7--54ad--acab--5839a456146b-osd--block--31ad454b--c5b7--54ad--acab--5839a456146b'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xKngM6-LQyz-Rj7F-7sve-UhFC-KKz3-x6W3RS', 'scsi-0QEMU_QEMU_HARDDISK_c523d18d-f688-4547-bb4c-d63e44be8719', 'scsi-SQEMU_QEMU_HARDDISK_c523d18d-f688-4547-bb4c-d63e44be8719'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:17:14.296365 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--2eb0e424--9f58--550c--b8cf--76c1b52e517a-osd--block--2eb0e424--9f58--550c--b8cf--76c1b52e517a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-aTjS2u-DqZb-KwhC-VTW1-S3tv-oUtt-8jR2oi', 'scsi-0QEMU_QEMU_HARDDISK_e42fce45-67a3-477c-881f-6db38785a929', 'scsi-SQEMU_QEMU_HARDDISK_e42fce45-67a3-477c-881f-6db38785a929'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:17:14.296380 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28d32b1f-54bf-4890-9371-a2140c9d3e48', 'scsi-SQEMU_QEMU_HARDDISK_28d32b1f-54bf-4890-9371-a2140c9d3e48'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:17:14.296390 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-06-19-22-25-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:17:14.296398 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.296408 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fc1251bd--e592--50b3--b197--385f411a7339-osd--block--fc1251bd--e592--50b3--b197--385f411a7339', 'dm-uuid-LVM-O0gBJzBTc4KRPexI3RumJDTRHsjXEAJqrmIUWQrIiVWfdDBmbIDQHG2A4MuCinJ5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-06 20:17:14.296416 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--b5f0fce0--432f--57fb--bebd--426658f60987-osd--block--b5f0fce0--432f--57fb--bebd--426658f60987', 'dm-uuid-LVM-177gejIMY5lQSIa8RjRlJ1ZfVu8100q5WAzDShduKuNhHFM4DFY36XReOHU4dGHQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-06 20:17:14.296431 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:17:14.296440 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.296453 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:17:14.296462 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:17:14.296471 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:17:14.296485 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:17:14.296494 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:17:14.296503 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:17:14.296512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:17:14.296521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:17:14.296535 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:17:14.296544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:17:14.296557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:17:14.296566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:17:14.296595 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8627e713-83d6-40b5-b9ad-70826e27e3e5', 'scsi-SQEMU_QEMU_HARDDISK_8627e713-83d6-40b5-b9ad-70826e27e3e5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8627e713-83d6-40b5-b9ad-70826e27e3e5-part1', 'scsi-SQEMU_QEMU_HARDDISK_8627e713-83d6-40b5-b9ad-70826e27e3e5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8627e713-83d6-40b5-b9ad-70826e27e3e5-part14', 'scsi-SQEMU_QEMU_HARDDISK_8627e713-83d6-40b5-b9ad-70826e27e3e5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8627e713-83d6-40b5-b9ad-70826e27e3e5-part15', 'scsi-SQEMU_QEMU_HARDDISK_8627e713-83d6-40b5-b9ad-70826e27e3e5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 
'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8627e713-83d6-40b5-b9ad-70826e27e3e5-part16', 'scsi-SQEMU_QEMU_HARDDISK_8627e713-83d6-40b5-b9ad-70826e27e3e5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:17:14.296611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:17:14.296620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:17:14.296629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': 
None, 'virtual': 1}})  2025-07-06 20:17:14.296642 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--fc1251bd--e592--50b3--b197--385f411a7339-osd--block--fc1251bd--e592--50b3--b197--385f411a7339'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-RyHzxU-aBiw-OJMc-20Q4-Jk3v-wYcp-56OPxc', 'scsi-0QEMU_QEMU_HARDDISK_4a0eaf3f-1395-4073-9878-c6e703eff332', 'scsi-SQEMU_QEMU_HARDDISK_4a0eaf3f-1395-4073-9878-c6e703eff332'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:17:14.296660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_df244d07-90ba-451b-8ce4-5a19b3d2e3c9', 'scsi-SQEMU_QEMU_HARDDISK_df244d07-90ba-451b-8ce4-5a19b3d2e3c9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_df244d07-90ba-451b-8ce4-5a19b3d2e3c9-part1', 'scsi-SQEMU_QEMU_HARDDISK_df244d07-90ba-451b-8ce4-5a19b3d2e3c9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_df244d07-90ba-451b-8ce4-5a19b3d2e3c9-part14', 'scsi-SQEMU_QEMU_HARDDISK_df244d07-90ba-451b-8ce4-5a19b3d2e3c9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_df244d07-90ba-451b-8ce4-5a19b3d2e3c9-part15', 'scsi-SQEMU_QEMU_HARDDISK_df244d07-90ba-451b-8ce4-5a19b3d2e3c9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_df244d07-90ba-451b-8ce4-5a19b3d2e3c9-part16', 'scsi-SQEMU_QEMU_HARDDISK_df244d07-90ba-451b-8ce4-5a19b3d2e3c9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:17:14.296675 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-06-19-22-28-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:17:14.296685 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b5f0fce0--432f--57fb--bebd--426658f60987-osd--block--b5f0fce0--432f--57fb--bebd--426658f60987'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-eNwe9v-UwcW-mdfT-UY3c-3ejI-jUS1-pkX8o1', 'scsi-0QEMU_QEMU_HARDDISK_1751cfdb-b4ca-4b06-9fa0-b986eec2737a', 'scsi-SQEMU_QEMU_HARDDISK_1751cfdb-b4ca-4b06-9fa0-b986eec2737a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:17:14.296698 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29aeef2c-15f7-4912-be6e-922934b043d5', 'scsi-SQEMU_QEMU_HARDDISK_29aeef2c-15f7-4912-be6e-922934b043d5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:17:14.296713 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-06-19-22-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:17:14.296723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:17:14.296732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2025-07-06 20:17:14.296740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:17:14.296755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:17:14.296763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:17:14.296772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:17:14.296785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:17:14.296794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:17:14.296803 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.296819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ff95bacd-4fbb-4999-b5b7-64f4756d9376', 'scsi-SQEMU_QEMU_HARDDISK_ff95bacd-4fbb-4999-b5b7-64f4756d9376'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ff95bacd-4fbb-4999-b5b7-64f4756d9376-part1', 'scsi-SQEMU_QEMU_HARDDISK_ff95bacd-4fbb-4999-b5b7-64f4756d9376-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ff95bacd-4fbb-4999-b5b7-64f4756d9376-part14', 'scsi-SQEMU_QEMU_HARDDISK_ff95bacd-4fbb-4999-b5b7-64f4756d9376-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_ff95bacd-4fbb-4999-b5b7-64f4756d9376-part15', 'scsi-SQEMU_QEMU_HARDDISK_ff95bacd-4fbb-4999-b5b7-64f4756d9376-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ff95bacd-4fbb-4999-b5b7-64f4756d9376-part16', 'scsi-SQEMU_QEMU_HARDDISK_ff95bacd-4fbb-4999-b5b7-64f4756d9376-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:17:14.296834 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.296843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-06-19-22-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:17:14.296852 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.296861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:17:14.296876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:17:14.296886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:17:14.296894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:17:14.296908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2025-07-06 20:17:14.296918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:17:14.296927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:17:14.296940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:17:14.296950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a47642e-d74b-47af-9cfb-c13ee6345465', 'scsi-SQEMU_QEMU_HARDDISK_9a47642e-d74b-47af-9cfb-c13ee6345465'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a47642e-d74b-47af-9cfb-c13ee6345465-part1', 'scsi-SQEMU_QEMU_HARDDISK_9a47642e-d74b-47af-9cfb-c13ee6345465-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a47642e-d74b-47af-9cfb-c13ee6345465-part14', 'scsi-SQEMU_QEMU_HARDDISK_9a47642e-d74b-47af-9cfb-c13ee6345465-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a47642e-d74b-47af-9cfb-c13ee6345465-part15', 'scsi-SQEMU_QEMU_HARDDISK_9a47642e-d74b-47af-9cfb-c13ee6345465-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a47642e-d74b-47af-9cfb-c13ee6345465-part16', 'scsi-SQEMU_QEMU_HARDDISK_9a47642e-d74b-47af-9cfb-c13ee6345465-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:17:14.296966 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-06-19-22-38-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:17:14.296975 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.296984 | orchestrator | 2025-07-06 20:17:14.296993 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-07-06 20:17:14.297002 | orchestrator | Sunday 06 July 2025 20:06:38 +0000 (0:00:01.711) 0:00:32.194 *********** 2025-07-06 20:17:14.297011 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--22d6bcb2--409c--5bf5--80b4--f4dcfc8f2a09-osd--block--22d6bcb2--409c--5bf5--80b4--f4dcfc8f2a09', 'dm-uuid-LVM-rjbWF69KjZA1lciHg9IvVSUsIBY4Kg80WIL8NwVJDt2W0vvy1hn52SYPacAZrYqR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297026 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1256d0fb--e60f--50ff--afd8--4edc5f2c0a15-osd--block--1256d0fb--e60f--50ff--afd8--4edc5f2c0a15', 'dm-uuid-LVM-CBB30BSMi7D1675QBE6Kop3W0221LIf87NC6xDU42NdnRR273XaCkk7Ufim7E7AZ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297059 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297073 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297082 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297096 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--31ad454b--c5b7--54ad--acab--5839a456146b-osd--block--31ad454b--c5b7--54ad--acab--5839a456146b', 'dm-uuid-LVM-uunT5FMuh4bQub73Mz82ISkwuGVkewOLiWo1mOL02qkKQjgxsiMP7ETVSaq2tpWH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297111 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 
'item'})  2025-07-06 20:17:14.297120 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2eb0e424--9f58--550c--b8cf--76c1b52e517a-osd--block--2eb0e424--9f58--550c--b8cf--76c1b52e517a', 'dm-uuid-LVM-PUe3Aihj8e3x89rT30vRYRaGSeZDlm0iypVWDZzCyZEN8aOrGAcRuQeVn3b2BvIO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297129 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297138 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297151 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297165 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297229 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': 
None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297238 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297247 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297256 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 
20:17:14.297269 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297286 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8571afc0-e036-46a5-988a-49c98e90c838', 'scsi-SQEMU_QEMU_HARDDISK_8571afc0-e036-46a5-988a-49c98e90c838'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8571afc0-e036-46a5-988a-49c98e90c838-part1', 'scsi-SQEMU_QEMU_HARDDISK_8571afc0-e036-46a5-988a-49c98e90c838-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8571afc0-e036-46a5-988a-49c98e90c838-part14', 'scsi-SQEMU_QEMU_HARDDISK_8571afc0-e036-46a5-988a-49c98e90c838-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_8571afc0-e036-46a5-988a-49c98e90c838-part15', 'scsi-SQEMU_QEMU_HARDDISK_8571afc0-e036-46a5-988a-49c98e90c838-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8571afc0-e036-46a5-988a-49c98e90c838-part16', 'scsi-SQEMU_QEMU_HARDDISK_8571afc0-e036-46a5-988a-49c98e90c838-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297303 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297313 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--22d6bcb2--409c--5bf5--80b4--f4dcfc8f2a09-osd--block--22d6bcb2--409c--5bf5--80b4--f4dcfc8f2a09'], 'host': 
'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-QndfkN-PmDn-892W-SloC-8ojV-i8Ey-uDNKwa', 'scsi-0QEMU_QEMU_HARDDISK_3c29cd91-58e9-42ce-8653-990321e9d76b', 'scsi-SQEMU_QEMU_HARDDISK_3c29cd91-58e9-42ce-8653-990321e9d76b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297327 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297341 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--1256d0fb--e60f--50ff--afd8--4edc5f2c0a15-osd--block--1256d0fb--e60f--50ff--afd8--4edc5f2c0a15'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-VC67cJ-cfHh-yd2t-xcB2-EPLx-jHbU-PYUcy2', 'scsi-0QEMU_QEMU_HARDDISK_fd99b70f-8aa3-4e15-8e66-07a34fe10111', 'scsi-SQEMU_QEMU_HARDDISK_fd99b70f-8aa3-4e15-8e66-07a34fe10111'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297356 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c9a7d91-c8fc-48f8-acad-853231e255dd', 'scsi-SQEMU_QEMU_HARDDISK_0c9a7d91-c8fc-48f8-acad-853231e255dd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297366 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-06-19-22-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297375 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297395 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c4ee01e1-6308-4ea1-8b72-f8c7aa0af3e8', 'scsi-SQEMU_QEMU_HARDDISK_c4ee01e1-6308-4ea1-8b72-f8c7aa0af3e8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c4ee01e1-6308-4ea1-8b72-f8c7aa0af3e8-part1', 'scsi-SQEMU_QEMU_HARDDISK_c4ee01e1-6308-4ea1-8b72-f8c7aa0af3e8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c4ee01e1-6308-4ea1-8b72-f8c7aa0af3e8-part14', 'scsi-SQEMU_QEMU_HARDDISK_c4ee01e1-6308-4ea1-8b72-f8c7aa0af3e8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c4ee01e1-6308-4ea1-8b72-f8c7aa0af3e8-part15', 'scsi-SQEMU_QEMU_HARDDISK_c4ee01e1-6308-4ea1-8b72-f8c7aa0af3e8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c4ee01e1-6308-4ea1-8b72-f8c7aa0af3e8-part16', 'scsi-SQEMU_QEMU_HARDDISK_c4ee01e1-6308-4ea1-8b72-f8c7aa0af3e8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-07-06 20:17:14.297411 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.297420 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--31ad454b--c5b7--54ad--acab--5839a456146b-osd--block--31ad454b--c5b7--54ad--acab--5839a456146b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xKngM6-LQyz-Rj7F-7sve-UhFC-KKz3-x6W3RS', 'scsi-0QEMU_QEMU_HARDDISK_c523d18d-f688-4547-bb4c-d63e44be8719', 'scsi-SQEMU_QEMU_HARDDISK_c523d18d-f688-4547-bb4c-d63e44be8719'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297429 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fc1251bd--e592--50b3--b197--385f411a7339-osd--block--fc1251bd--e592--50b3--b197--385f411a7339', 'dm-uuid-LVM-O0gBJzBTc4KRPexI3RumJDTRHsjXEAJqrmIUWQrIiVWfdDBmbIDQHG2A4MuCinJ5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297443 | orchestrator | skipping: [testbed-node-4] => (item={'changed': 
False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--2eb0e424--9f58--550c--b8cf--76c1b52e517a-osd--block--2eb0e424--9f58--550c--b8cf--76c1b52e517a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-aTjS2u-DqZb-KwhC-VTW1-S3tv-oUtt-8jR2oi', 'scsi-0QEMU_QEMU_HARDDISK_e42fce45-67a3-477c-881f-6db38785a929', 'scsi-SQEMU_QEMU_HARDDISK_e42fce45-67a3-477c-881f-6db38785a929'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297458 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b5f0fce0--432f--57fb--bebd--426658f60987-osd--block--b5f0fce0--432f--57fb--bebd--426658f60987', 'dm-uuid-LVM-177gejIMY5lQSIa8RjRlJ1ZfVu8100q5WAzDShduKuNhHFM4DFY36XReOHU4dGHQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297475 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': 
{'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28d32b1f-54bf-4890-9371-a2140c9d3e48', 'scsi-SQEMU_QEMU_HARDDISK_28d32b1f-54bf-4890-9371-a2140c9d3e48'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297485 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297494 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297503 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297515 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-06-19-22-25-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297536 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297545 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297553 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297562 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297570 | orchestrator | skipping: [testbed-node-0] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297582 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297590 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297610 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297619 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297632 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_df244d07-90ba-451b-8ce4-5a19b3d2e3c9', 'scsi-SQEMU_QEMU_HARDDISK_df244d07-90ba-451b-8ce4-5a19b3d2e3c9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_df244d07-90ba-451b-8ce4-5a19b3d2e3c9-part1', 'scsi-SQEMU_QEMU_HARDDISK_df244d07-90ba-451b-8ce4-5a19b3d2e3c9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_df244d07-90ba-451b-8ce4-5a19b3d2e3c9-part14', 'scsi-SQEMU_QEMU_HARDDISK_df244d07-90ba-451b-8ce4-5a19b3d2e3c9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_df244d07-90ba-451b-8ce4-5a19b3d2e3c9-part15', 'scsi-SQEMU_QEMU_HARDDISK_df244d07-90ba-451b-8ce4-5a19b3d2e3c9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_df244d07-90ba-451b-8ce4-5a19b3d2e3c9-part16', 'scsi-SQEMU_QEMU_HARDDISK_df244d07-90ba-451b-8ce4-5a19b3d2e3c9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-07-06 20:17:14.297647 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.297655 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297668 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-06-19-22-28-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297677 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297685 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297693 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297710 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8627e713-83d6-40b5-b9ad-70826e27e3e5', 'scsi-SQEMU_QEMU_HARDDISK_8627e713-83d6-40b5-b9ad-70826e27e3e5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8627e713-83d6-40b5-b9ad-70826e27e3e5-part1', 'scsi-SQEMU_QEMU_HARDDISK_8627e713-83d6-40b5-b9ad-70826e27e3e5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8627e713-83d6-40b5-b9ad-70826e27e3e5-part14', 'scsi-SQEMU_QEMU_HARDDISK_8627e713-83d6-40b5-b9ad-70826e27e3e5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8627e713-83d6-40b5-b9ad-70826e27e3e5-part15', 'scsi-SQEMU_QEMU_HARDDISK_8627e713-83d6-40b5-b9ad-70826e27e3e5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {2025-07-06 20:17:14 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:17:14.297875 | orchestrator | 'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8627e713-83d6-40b5-b9ad-70826e27e3e5-part16', 'scsi-SQEMU_QEMU_HARDDISK_8627e713-83d6-40b5-b9ad-70826e27e3e5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 
'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297891 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--fc1251bd--e592--50b3--b197--385f411a7339-osd--block--fc1251bd--e592--50b3--b197--385f411a7339'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-RyHzxU-aBiw-OJMc-20Q4-Jk3v-wYcp-56OPxc', 'scsi-0QEMU_QEMU_HARDDISK_4a0eaf3f-1395-4073-9878-c6e703eff332', 'scsi-SQEMU_QEMU_HARDDISK_4a0eaf3f-1395-4073-9878-c6e703eff332'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297900 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b5f0fce0--432f--57fb--bebd--426658f60987-osd--block--b5f0fce0--432f--57fb--bebd--426658f60987'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-eNwe9v-UwcW-mdfT-UY3c-3ejI-jUS1-pkX8o1', 'scsi-0QEMU_QEMU_HARDDISK_1751cfdb-b4ca-4b06-9fa0-b986eec2737a', 'scsi-SQEMU_QEMU_HARDDISK_1751cfdb-b4ca-4b06-9fa0-b986eec2737a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297914 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29aeef2c-15f7-4912-be6e-922934b043d5', 'scsi-SQEMU_QEMU_HARDDISK_29aeef2c-15f7-4912-be6e-922934b043d5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297929 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297943 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297951 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-06-19-22-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297960 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297968 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297980 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.297994 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.298007 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.298062 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.298093 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ff95bacd-4fbb-4999-b5b7-64f4756d9376', 'scsi-SQEMU_QEMU_HARDDISK_ff95bacd-4fbb-4999-b5b7-64f4756d9376'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ff95bacd-4fbb-4999-b5b7-64f4756d9376-part1', 'scsi-SQEMU_QEMU_HARDDISK_ff95bacd-4fbb-4999-b5b7-64f4756d9376-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ff95bacd-4fbb-4999-b5b7-64f4756d9376-part14', 'scsi-SQEMU_QEMU_HARDDISK_ff95bacd-4fbb-4999-b5b7-64f4756d9376-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ff95bacd-4fbb-4999-b5b7-64f4756d9376-part15', 'scsi-SQEMU_QEMU_HARDDISK_ff95bacd-4fbb-4999-b5b7-64f4756d9376-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ff95bacd-4fbb-4999-b5b7-64f4756d9376-part16', 'scsi-SQEMU_QEMU_HARDDISK_ff95bacd-4fbb-4999-b5b7-64f4756d9376-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-07-06 20:17:14.298118 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-06-19-22-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.298134 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.298147 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.298155 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.298192 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.298203 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.298211 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.298219 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.298240 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.298249 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.298262 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.298270 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.298283 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a47642e-d74b-47af-9cfb-c13ee6345465', 'scsi-SQEMU_QEMU_HARDDISK_9a47642e-d74b-47af-9cfb-c13ee6345465'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a47642e-d74b-47af-9cfb-c13ee6345465-part1', 'scsi-SQEMU_QEMU_HARDDISK_9a47642e-d74b-47af-9cfb-c13ee6345465-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a47642e-d74b-47af-9cfb-c13ee6345465-part14', 'scsi-SQEMU_QEMU_HARDDISK_9a47642e-d74b-47af-9cfb-c13ee6345465-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9a47642e-d74b-47af-9cfb-c13ee6345465-part15', 'scsi-SQEMU_QEMU_HARDDISK_9a47642e-d74b-47af-9cfb-c13ee6345465-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_9a47642e-d74b-47af-9cfb-c13ee6345465-part16', 'scsi-SQEMU_QEMU_HARDDISK_9a47642e-d74b-47af-9cfb-c13ee6345465-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.298325 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-06-19-22-38-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:17:14.298334 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.298342 | orchestrator | 2025-07-06 20:17:14.298350 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-07-06 20:17:14.298358 | orchestrator | Sunday 06 July 2025 20:06:39 +0000 (0:00:01.123) 0:00:33.317 *********** 2025-07-06 20:17:14.298370 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:17:14.298379 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:17:14.298387 | orchestrator | ok: [testbed-node-5] 2025-07-06 
20:17:14.298395 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:17:14.298402 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:17:14.298410 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:17:14.298418 | orchestrator | 2025-07-06 20:17:14.298426 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-07-06 20:17:14.298434 | orchestrator | Sunday 06 July 2025 20:06:41 +0000 (0:00:01.371) 0:00:34.689 *********** 2025-07-06 20:17:14.298442 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:17:14.298450 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:17:14.298457 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:17:14.298466 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:17:14.298475 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:17:14.298484 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:17:14.298493 | orchestrator | 2025-07-06 20:17:14.298502 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-07-06 20:17:14.298512 | orchestrator | Sunday 06 July 2025 20:06:42 +0000 (0:00:00.974) 0:00:35.664 *********** 2025-07-06 20:17:14.298521 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.298529 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.298538 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.298547 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.298556 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.298565 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.298575 | orchestrator | 2025-07-06 20:17:14.298584 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-07-06 20:17:14.298593 | orchestrator | Sunday 06 July 2025 20:06:43 +0000 (0:00:01.388) 0:00:37.052 *********** 2025-07-06 20:17:14.298602 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.298611 | orchestrator | skipping: [testbed-node-4] 
2025-07-06 20:17:14.298627 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.298636 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.298645 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.298654 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.298663 | orchestrator | 2025-07-06 20:17:14.298672 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-07-06 20:17:14.298680 | orchestrator | Sunday 06 July 2025 20:06:44 +0000 (0:00:00.906) 0:00:37.959 *********** 2025-07-06 20:17:14.298689 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.298698 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.298707 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.298716 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.298725 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.298734 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.298743 | orchestrator | 2025-07-06 20:17:14.298752 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-07-06 20:17:14.298761 | orchestrator | Sunday 06 July 2025 20:06:45 +0000 (0:00:00.946) 0:00:38.905 *********** 2025-07-06 20:17:14.298770 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.298778 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.298785 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.298793 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.298801 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.298809 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.298816 | orchestrator | 2025-07-06 20:17:14.298824 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-07-06 20:17:14.298832 | orchestrator | Sunday 06 July 2025 20:06:46 +0000 (0:00:01.026) 0:00:39.931 *********** 
2025-07-06 20:17:14.298840 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-07-06 20:17:14.298848 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-07-06 20:17:14.298856 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-07-06 20:17:14.298863 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-07-06 20:17:14.298871 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-07-06 20:17:14.298879 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-07-06 20:17:14.298887 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-07-06 20:17:14.298895 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-07-06 20:17:14.298902 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-07-06 20:17:14.298910 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-07-06 20:17:14.298918 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-07-06 20:17:14.298926 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-07-06 20:17:14.298937 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-07-06 20:17:14.298945 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-07-06 20:17:14.298953 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-07-06 20:17:14.298961 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-07-06 20:17:14.298969 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-07-06 20:17:14.298977 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-07-06 20:17:14.298984 | orchestrator | 2025-07-06 20:17:14.298992 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-07-06 20:17:14.299000 | orchestrator | Sunday 06 July 2025 20:06:51 +0000 (0:00:04.538) 0:00:44.470 *********** 2025-07-06 20:17:14.299008 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-0)  2025-07-06 20:17:14.299016 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-07-06 20:17:14.299023 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-07-06 20:17:14.299031 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.299039 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-07-06 20:17:14.299047 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-07-06 20:17:14.299060 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-07-06 20:17:14.299069 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.299082 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-07-06 20:17:14.299096 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-07-06 20:17:14.299125 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-07-06 20:17:14.299138 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.299151 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-07-06 20:17:14.299162 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-07-06 20:17:14.299197 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-07-06 20:17:14.299210 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.299223 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-07-06 20:17:14.299235 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-07-06 20:17:14.299247 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-07-06 20:17:14.299260 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.299273 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-07-06 20:17:14.299286 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-07-06 20:17:14.299294 | orchestrator | skipping: 
[testbed-node-2] => (item=testbed-node-2)  2025-07-06 20:17:14.299302 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.299309 | orchestrator | 2025-07-06 20:17:14.299317 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-07-06 20:17:14.299325 | orchestrator | Sunday 06 July 2025 20:06:51 +0000 (0:00:00.806) 0:00:45.276 *********** 2025-07-06 20:17:14.299332 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.299340 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.299348 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.299356 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:17:14.299364 | orchestrator | 2025-07-06 20:17:14.299372 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-07-06 20:17:14.299381 | orchestrator | Sunday 06 July 2025 20:06:53 +0000 (0:00:01.158) 0:00:46.434 *********** 2025-07-06 20:17:14.299388 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.299396 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.299404 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.299411 | orchestrator | 2025-07-06 20:17:14.299419 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-07-06 20:17:14.299427 | orchestrator | Sunday 06 July 2025 20:06:53 +0000 (0:00:00.365) 0:00:46.799 *********** 2025-07-06 20:17:14.299434 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.299442 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.299450 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.299457 | orchestrator | 2025-07-06 20:17:14.299465 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 
2025-07-06 20:17:14.299473 | orchestrator | Sunday 06 July 2025 20:06:54 +0000 (0:00:00.675) 0:00:47.475 ***********
2025-07-06 20:17:14.299481 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.299488 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:17:14.299496 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:17:14.299504 | orchestrator |
2025-07-06 20:17:14.299511 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-07-06 20:17:14.299519 | orchestrator | Sunday 06 July 2025 20:06:54 +0000 (0:00:00.311) 0:00:47.787 ***********
2025-07-06 20:17:14.299527 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:17:14.299535 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:17:14.299542 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:17:14.299550 | orchestrator |
2025-07-06 20:17:14.299565 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-07-06 20:17:14.299573 | orchestrator | Sunday 06 July 2025 20:06:54 +0000 (0:00:00.407) 0:00:48.194 ***********
2025-07-06 20:17:14.299581 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-06 20:17:14.299588 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-06 20:17:14.299596 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-06 20:17:14.299604 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.299611 | orchestrator |
2025-07-06 20:17:14.299619 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-07-06 20:17:14.299627 | orchestrator | Sunday 06 July 2025 20:06:55 +0000 (0:00:00.293) 0:00:48.488 ***********
2025-07-06 20:17:14.299635 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-06 20:17:14.299647 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-06 20:17:14.299655 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-06 20:17:14.299662 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.299670 | orchestrator |
2025-07-06 20:17:14.299677 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-07-06 20:17:14.299685 | orchestrator | Sunday 06 July 2025 20:06:55 +0000 (0:00:00.332) 0:00:48.821 ***********
2025-07-06 20:17:14.299693 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-06 20:17:14.299700 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-06 20:17:14.299708 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-06 20:17:14.299716 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.299723 | orchestrator |
2025-07-06 20:17:14.299731 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-07-06 20:17:14.299739 | orchestrator | Sunday 06 July 2025 20:06:56 +0000 (0:00:00.644) 0:00:49.465 ***********
2025-07-06 20:17:14.299746 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:17:14.299754 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:17:14.299762 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:17:14.299769 | orchestrator |
2025-07-06 20:17:14.299777 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-07-06 20:17:14.299784 | orchestrator | Sunday 06 July 2025 20:06:56 +0000 (0:00:00.656) 0:00:50.122 ***********
2025-07-06 20:17:14.299792 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-07-06 20:17:14.299800 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-07-06 20:17:14.299807 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-07-06 20:17:14.299815 | orchestrator |
2025-07-06 20:17:14.299828 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-07-06 20:17:14.299836 | orchestrator | Sunday 06 July 2025 20:06:57 +0000 (0:00:00.612) 0:00:50.734 ***********
2025-07-06 20:17:14.299844 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-07-06 20:17:14.299852 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-07-06 20:17:14.299859 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-07-06 20:17:14.299867 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-07-06 20:17:14.299875 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-07-06 20:17:14.299883 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-07-06 20:17:14.299890 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-07-06 20:17:14.299898 | orchestrator |
2025-07-06 20:17:14.299906 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-07-06 20:17:14.299914 | orchestrator | Sunday 06 July 2025 20:06:58 +0000 (0:00:00.703) 0:00:51.438 ***********
2025-07-06 20:17:14.299921 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-07-06 20:17:14.299929 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-07-06 20:17:14.299942 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-07-06 20:17:14.299949 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-07-06 20:17:14.299957 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-07-06 20:17:14.299965 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-07-06 20:17:14.299973 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-07-06 20:17:14.299980 | orchestrator |
2025-07-06 20:17:14.299988 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-07-06 20:17:14.299996 | orchestrator | Sunday 06 July 2025 20:06:59 +0000 (0:00:01.907) 0:00:53.345 ***********
2025-07-06 20:17:14.300004 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-07-06 20:17:14.300012 | orchestrator |
2025-07-06 20:17:14.300020 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-07-06 20:17:14.300028 | orchestrator | Sunday 06 July 2025 20:07:01 +0000 (0:00:01.043) 0:00:54.388 ***********
2025-07-06 20:17:14.300036 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-07-06 20:17:14.300044 | orchestrator |
2025-07-06 20:17:14.300051 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-07-06 20:17:14.300063 | orchestrator | Sunday 06 July 2025 20:07:02 +0000 (0:00:01.521) 0:00:55.910 ***********
2025-07-06 20:17:14.300077 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.300089 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:17:14.300102 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:17:14.300114 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:17:14.300127 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:17:14.300141 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:17:14.300154 | orchestrator |
2025-07-06 20:17:14.300213 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-07-06 20:17:14.300224 | orchestrator | Sunday 06 July 2025 20:07:03 +0000 (0:00:01.279) 0:00:57.189 ***********
2025-07-06 20:17:14.300232 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.300240 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.300247 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:17:14.300255 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.300263 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:17:14.300270 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:17:14.300278 | orchestrator |
2025-07-06 20:17:14.300286 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-07-06 20:17:14.300304 | orchestrator | Sunday 06 July 2025 20:07:04 +0000 (0:00:01.041) 0:00:58.231 ***********
2025-07-06 20:17:14.300318 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:17:14.300331 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.300344 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:17:14.300357 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.300371 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.300385 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:17:14.300400 | orchestrator |
2025-07-06 20:17:14.300414 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-07-06 20:17:14.300429 | orchestrator | Sunday 06 July 2025 20:07:06 +0000 (0:00:01.359) 0:00:59.591 ***********
2025-07-06 20:17:14.300443 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.300456 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:17:14.300470 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:17:14.300484 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.300499 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:17:14.300514 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.300536 | orchestrator |
2025-07-06 20:17:14.300550 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-07-06 20:17:14.300564 | orchestrator | Sunday 06 July 2025 20:07:07 +0000 (0:00:00.783) 0:01:00.374 ***********
2025-07-06 20:17:14.300578 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.300591 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:17:14.300604 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:17:14.300618 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:17:14.300633 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:17:14.300646 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:17:14.300661 | orchestrator |
2025-07-06 20:17:14.300675 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-07-06 20:17:14.300697 | orchestrator | Sunday 06 July 2025 20:07:08 +0000 (0:00:01.288) 0:01:01.662 ***********
2025-07-06 20:17:14.300712 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.300725 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:17:14.300739 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:17:14.300752 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.300765 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.300779 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.300793 | orchestrator |
2025-07-06 20:17:14.300807 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-07-06 20:17:14.300821 | orchestrator | Sunday 06 July 2025 20:07:09 +0000 (0:00:00.752) 0:01:02.415 ***********
2025-07-06 20:17:14.300833 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.300844 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:17:14.300855 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:17:14.300866 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.300876 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.300888 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.300898 | orchestrator |
2025-07-06 20:17:14.300910 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-07-06 20:17:14.300920 | orchestrator | Sunday 06 July 2025 20:07:10 +0000 (0:00:00.977) 0:01:03.393 ***********
2025-07-06 20:17:14.300931 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:17:14.300943 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:17:14.300955 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:17:14.300966 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:17:14.300978 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:17:14.300989 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:17:14.301000 | orchestrator |
2025-07-06 20:17:14.301011 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-07-06 20:17:14.301023 | orchestrator | Sunday 06 July 2025 20:07:11 +0000 (0:00:01.259) 0:01:04.652 ***********
2025-07-06 20:17:14.301035 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:17:14.301046 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:17:14.301058 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:17:14.301069 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:17:14.301081 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:17:14.301092 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:17:14.301104 | orchestrator |
2025-07-06 20:17:14.301116 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-07-06 20:17:14.301127 | orchestrator | Sunday 06 July 2025 20:07:12 +0000 (0:00:01.700) 0:01:06.352 ***********
2025-07-06 20:17:14.301139 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.301150 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:17:14.301162 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:17:14.301188 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.301199 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.301211 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.301221 | orchestrator |
2025-07-06 20:17:14.301232 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-07-06 20:17:14.301241 | orchestrator | Sunday 06 July 2025 20:07:13 +0000 (0:00:00.700) 0:01:07.053 ***********
2025-07-06 20:17:14.301258 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.301269 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:17:14.301280 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:17:14.301290 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:17:14.301301 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:17:14.301312 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:17:14.301324 | orchestrator |
2025-07-06 20:17:14.301335 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-07-06 20:17:14.301344 | orchestrator | Sunday 06 July 2025 20:07:14 +0000 (0:00:01.048) 0:01:08.102 ***********
2025-07-06 20:17:14.301353 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:17:14.301364 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:17:14.301374 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:17:14.301384 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.301394 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.301404 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.301414 | orchestrator |
2025-07-06 20:17:14.301425 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-07-06 20:17:14.301435 | orchestrator | Sunday 06 July 2025 20:07:15 +0000 (0:00:00.994) 0:01:09.097 ***********
2025-07-06 20:17:14.301445 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:17:14.301455 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:17:14.301466 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:17:14.301475 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.301485 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.301494 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.301504 | orchestrator |
2025-07-06 20:17:14.301520 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-07-06 20:17:14.301530 | orchestrator | Sunday 06 July 2025 20:07:16 +0000 (0:00:01.029) 0:01:10.127 ***********
2025-07-06 20:17:14.301541 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:17:14.301552 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:17:14.301563 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:17:14.301573 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.301583 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.301594 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.301604 | orchestrator |
2025-07-06 20:17:14.301615 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-07-06 20:17:14.301625 | orchestrator | Sunday 06 July 2025 20:07:17 +0000 (0:00:00.661) 0:01:10.789 ***********
2025-07-06 20:17:14.301635 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.301646 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:17:14.301657 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:17:14.301667 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.301677 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.301688 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.301699 | orchestrator |
2025-07-06 20:17:14.301710 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-07-06 20:17:14.301721 | orchestrator | Sunday 06 July 2025 20:07:18 +0000 (0:00:00.814) 0:01:11.603 ***********
2025-07-06 20:17:14.301732 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.301744 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:17:14.301755 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:17:14.301765 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.301776 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.301786 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.301796 | orchestrator |
2025-07-06 20:17:14.301818 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-07-06 20:17:14.301830 | orchestrator | Sunday 06 July 2025 20:07:18 +0000 (0:00:00.627) 0:01:12.231 ***********
2025-07-06 20:17:14.301840 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.301852 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:17:14.301862 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:17:14.301873 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:17:14.301894 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:17:14.301905 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:17:14.301916 | orchestrator |
2025-07-06 20:17:14.301926 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-07-06 20:17:14.301938 | orchestrator | Sunday 06 July 2025 20:07:19 +0000 (0:00:00.982) 0:01:13.214 ***********
2025-07-06 20:17:14.301947 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:17:14.301957 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:17:14.301967 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:17:14.301977 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:17:14.301987 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:17:14.301997 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:17:14.302009 | orchestrator |
2025-07-06 20:17:14.302065 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-07-06 20:17:14.302078 | orchestrator | Sunday 06 July 2025 20:07:20 +0000 (0:00:00.612) 0:01:13.826 ***********
2025-07-06 20:17:14.302089 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:17:14.302100 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:17:14.302111 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:17:14.302123 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:17:14.302134 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:17:14.302146 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:17:14.302158 | orchestrator |
2025-07-06 20:17:14.302187 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2025-07-06 20:17:14.302199 | orchestrator | Sunday 06 July 2025 20:07:21 +0000 (0:00:01.226) 0:01:15.053 ***********
2025-07-06 20:17:14.302210 | orchestrator | changed: [testbed-node-4]
2025-07-06 20:17:14.302222 | orchestrator | changed: [testbed-node-3]
2025-07-06 20:17:14.302234 | orchestrator | changed: [testbed-node-5]
2025-07-06 20:17:14.302245 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:17:14.302257 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:17:14.302267 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:17:14.302280 | orchestrator |
2025-07-06 20:17:14.302291 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2025-07-06 20:17:14.302303 | orchestrator | Sunday 06 July 2025 20:07:23 +0000 (0:00:01.808) 0:01:16.861 ***********
2025-07-06 20:17:14.302315 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:17:14.302326 | orchestrator | changed: [testbed-node-3]
2025-07-06 20:17:14.302337 | orchestrator | changed: [testbed-node-5]
2025-07-06 20:17:14.302349 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:17:14.302361 | orchestrator | changed: [testbed-node-4]
2025-07-06 20:17:14.302371 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:17:14.302381 | orchestrator |
2025-07-06 20:17:14.302390 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2025-07-06 20:17:14.302401 | orchestrator | Sunday 06 July 2025 20:07:25 +0000 (0:00:02.086) 0:01:18.948 ***********
2025-07-06 20:17:14.302412 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-07-06 20:17:14.302424 | orchestrator |
2025-07-06 20:17:14.302435 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2025-07-06 20:17:14.302445 | orchestrator | Sunday 06 July 2025 20:07:26 +0000 (0:00:01.147) 0:01:20.096 ***********
2025-07-06 20:17:14.302455 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.302466 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:17:14.302476 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:17:14.302487 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.302498 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.302508 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.302518 | orchestrator |
2025-07-06 20:17:14.302529 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2025-07-06 20:17:14.302539 | orchestrator | Sunday 06 July 2025 20:07:27 +0000 (0:00:00.748) 0:01:20.844 ***********
2025-07-06 20:17:14.302552 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.302563 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:17:14.302589 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:17:14.302601 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.302612 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.302623 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.302634 | orchestrator |
2025-07-06 20:17:14.302652 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2025-07-06 20:17:14.302664 | orchestrator | Sunday 06 July 2025 20:07:28 +0000 (0:00:00.558) 0:01:21.403 ***********
2025-07-06 20:17:14.302675 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-07-06 20:17:14.302686 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-07-06 20:17:14.302697 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-07-06 20:17:14.302708 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-07-06 20:17:14.302719 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-07-06 20:17:14.302730 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-07-06 20:17:14.302740 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-07-06 20:17:14.302751 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-07-06 20:17:14.302763 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-07-06 20:17:14.302769 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-07-06 20:17:14.302776 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-07-06 20:17:14.302801 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-07-06 20:17:14.302808 | orchestrator |
2025-07-06 20:17:14.302814 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2025-07-06 20:17:14.302821 | orchestrator | Sunday 06 July 2025 20:07:29 +0000 (0:00:01.560) 0:01:22.964 ***********
2025-07-06 20:17:14.302827 | orchestrator | changed: [testbed-node-5]
2025-07-06 20:17:14.302834 | orchestrator | changed: [testbed-node-4]
2025-07-06 20:17:14.302840 | orchestrator | changed: [testbed-node-3]
2025-07-06 20:17:14.302847 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:17:14.302853 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:17:14.302860 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:17:14.302866 | orchestrator |
2025-07-06 20:17:14.302873 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2025-07-06 20:17:14.302880 | orchestrator | Sunday 06 July 2025 20:07:30 +0000 (0:00:00.889) 0:01:23.854 ***********
2025-07-06 20:17:14.302886 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.302894 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:17:14.302906 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:17:14.302916 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.302927 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.302938 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.302949 | orchestrator |
2025-07-06 20:17:14.302960 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2025-07-06 20:17:14.302972 | orchestrator | Sunday 06 July 2025 20:07:31 +0000 (0:00:00.785) 0:01:24.639 ***********
2025-07-06 20:17:14.302983 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.302993 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:17:14.303005 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:17:14.303016 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.303026 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.303038 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.303049 | orchestrator |
2025-07-06 20:17:14.303061 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2025-07-06 20:17:14.303072 | orchestrator | Sunday 06 July 2025 20:07:31 +0000 (0:00:00.559) 0:01:25.199 ***********
2025-07-06 20:17:14.303092 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.303099 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:17:14.303105 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:17:14.303112 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.303118 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.303125 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.303131 | orchestrator |
2025-07-06 20:17:14.303138 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2025-07-06 20:17:14.303144 | orchestrator | Sunday 06 July 2025 20:07:32 +0000 (0:00:00.747) 0:01:25.947 ***********
2025-07-06 20:17:14.303154 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-07-06 20:17:14.303165 | orchestrator |
2025-07-06 20:17:14.303234 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2025-07-06 20:17:14.303246 | orchestrator | Sunday 06 July 2025 20:07:33 +0000 (0:00:01.124) 0:01:27.072 ***********
2025-07-06 20:17:14.303258 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:17:14.303272 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:17:14.303284 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:17:14.303295 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:17:14.303307 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:17:14.303317 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:17:14.303328 | orchestrator |
2025-07-06 20:17:14.303340 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2025-07-06 20:17:14.303351 | orchestrator | Sunday 06 July 2025 20:08:41 +0000 (0:01:07.346) 0:02:34.419 ***********
2025-07-06 20:17:14.303362 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-07-06 20:17:14.303373 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2025-07-06 20:17:14.303384 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2025-07-06 20:17:14.303395 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.303407 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-07-06 20:17:14.303418 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2025-07-06 20:17:14.303437 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2025-07-06 20:17:14.303448 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:17:14.303460 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-07-06 20:17:14.303472 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2025-07-06 20:17:14.303484 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2025-07-06 20:17:14.303493 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:17:14.303500 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-07-06 20:17:14.303506 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2025-07-06 20:17:14.303513 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2025-07-06 20:17:14.303519 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.303526 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-07-06 20:17:14.303532 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2025-07-06 20:17:14.303538 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2025-07-06 20:17:14.303544 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.303550 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-07-06 20:17:14.303565 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2025-07-06 20:17:14.303571 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2025-07-06 20:17:14.303585 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.303591 | orchestrator |
2025-07-06 20:17:14.303597 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2025-07-06 20:17:14.303604 | orchestrator | Sunday 06 July 2025 20:08:41 +0000 (0:00:00.797) 0:02:35.217 ***********
2025-07-06 20:17:14.303610 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.303616 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:17:14.303622 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:17:14.303628 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.303634 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.303640 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.303646 | orchestrator |
2025-07-06 20:17:14.303652 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2025-07-06 20:17:14.303658 | orchestrator | Sunday 06 July 2025 20:08:42 +0000 (0:00:00.562) 0:02:35.779 ***********
2025-07-06 20:17:14.303664 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.303670 | orchestrator |
2025-07-06 20:17:14.303676 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2025-07-06 20:17:14.303683 | orchestrator | Sunday 06 July 2025 20:08:42 +0000 (0:00:00.145) 0:02:35.925 ***********
2025-07-06 20:17:14.303689 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.303695 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:17:14.303701 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:17:14.303707 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.303713 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.303719 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.303725 | orchestrator |
2025-07-06 20:17:14.303731 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2025-07-06 20:17:14.303737 | orchestrator | Sunday 06 July 2025 20:08:43 +0000 (0:00:00.768) 0:02:36.693 ***********
2025-07-06 20:17:14.303743 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.303749 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:17:14.303755 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:17:14.303761 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.303767 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.303773 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.303779 | orchestrator |
2025-07-06 20:17:14.303786 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2025-07-06 20:17:14.303792 | orchestrator | Sunday 06 July 2025 20:08:43 +0000 (0:00:00.601) 0:02:37.295 ***********
2025-07-06 20:17:14.303798 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.303804 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:17:14.303810 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:17:14.303816 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.303822 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.303828 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.303834 | orchestrator |
2025-07-06 20:17:14.303840 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2025-07-06 20:17:14.303846 | orchestrator | Sunday 06 July 2025 20:08:44 +0000 (0:00:00.798) 0:02:38.093 ***********
2025-07-06 20:17:14.303852 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:17:14.303858 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:17:14.303864 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:17:14.303870 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:17:14.303876 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:17:14.303882 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:17:14.303888 | orchestrator |
2025-07-06 20:17:14.303894 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2025-07-06 20:17:14.303900 | orchestrator | Sunday 06 July 2025 20:08:46 +0000 (0:00:02.136) 0:02:40.230 ***********
2025-07-06 20:17:14.303906 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:17:14.303913 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:17:14.303918 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:17:14.303930 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:17:14.303936 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:17:14.303942 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:17:14.303948 | orchestrator |
2025-07-06 20:17:14.303954 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2025-07-06 20:17:14.303960 | orchestrator | Sunday 06 July 2025 20:08:47 +0000 (0:00:00.808) 0:02:41.038 ***********
2025-07-06 20:17:14.303967 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-07-06 20:17:14.303974 | orchestrator |
2025-07-06 20:17:14.303984 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2025-07-06 20:17:14.303990 | orchestrator | Sunday 06 July 2025 20:08:48 +0000 (0:00:01.176) 0:02:42.214 ***********
2025-07-06 20:17:14.303996 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.304002 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:17:14.304008 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:17:14.304014 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.304020 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.304026 | orchestrator | skipping:
[testbed-node-2] 2025-07-06 20:17:14.304033 | orchestrator | 2025-07-06 20:17:14.304039 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2025-07-06 20:17:14.304045 | orchestrator | Sunday 06 July 2025 20:08:49 +0000 (0:00:00.583) 0:02:42.798 *********** 2025-07-06 20:17:14.304051 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.304059 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.304069 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.304079 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.304089 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.304098 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.304109 | orchestrator | 2025-07-06 20:17:14.304120 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2025-07-06 20:17:14.304130 | orchestrator | Sunday 06 July 2025 20:08:50 +0000 (0:00:00.870) 0:02:43.668 *********** 2025-07-06 20:17:14.304138 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.304145 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.304151 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.304157 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.304163 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.304197 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.304206 | orchestrator | 2025-07-06 20:17:14.304212 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2025-07-06 20:17:14.304218 | orchestrator | Sunday 06 July 2025 20:08:50 +0000 (0:00:00.616) 0:02:44.284 *********** 2025-07-06 20:17:14.304225 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.304231 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.304237 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.304243 | orchestrator | skipping: 
[testbed-node-0] 2025-07-06 20:17:14.304249 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.304255 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.304261 | orchestrator | 2025-07-06 20:17:14.304267 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2025-07-06 20:17:14.304273 | orchestrator | Sunday 06 July 2025 20:08:51 +0000 (0:00:00.830) 0:02:45.114 *********** 2025-07-06 20:17:14.304279 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.304285 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.304291 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.304297 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.304303 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.304309 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.304315 | orchestrator | 2025-07-06 20:17:14.304322 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2025-07-06 20:17:14.304328 | orchestrator | Sunday 06 July 2025 20:08:52 +0000 (0:00:00.544) 0:02:45.659 *********** 2025-07-06 20:17:14.304340 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.304346 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.304352 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.304358 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.304364 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.304370 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.304376 | orchestrator | 2025-07-06 20:17:14.304382 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2025-07-06 20:17:14.304388 | orchestrator | Sunday 06 July 2025 20:08:52 +0000 (0:00:00.587) 0:02:46.246 *********** 2025-07-06 20:17:14.304395 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.304401 | orchestrator | skipping: 
[testbed-node-4] 2025-07-06 20:17:14.304406 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.304413 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.304419 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.304425 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.304431 | orchestrator | 2025-07-06 20:17:14.304437 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2025-07-06 20:17:14.304443 | orchestrator | Sunday 06 July 2025 20:08:53 +0000 (0:00:00.442) 0:02:46.689 *********** 2025-07-06 20:17:14.304449 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.304455 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.304461 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.304467 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.304473 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.304479 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.304485 | orchestrator | 2025-07-06 20:17:14.304491 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2025-07-06 20:17:14.304498 | orchestrator | Sunday 06 July 2025 20:08:53 +0000 (0:00:00.607) 0:02:47.296 *********** 2025-07-06 20:17:14.304504 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:17:14.304510 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:17:14.304516 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:17:14.304522 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:17:14.304528 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:17:14.304534 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:17:14.304540 | orchestrator | 2025-07-06 20:17:14.304546 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2025-07-06 20:17:14.304552 | orchestrator | Sunday 06 July 2025 20:08:55 +0000 (0:00:01.149) 0:02:48.446 *********** 2025-07-06 
20:17:14.304558 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:17:14.304565 | orchestrator | 2025-07-06 20:17:14.304571 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2025-07-06 20:17:14.304577 | orchestrator | Sunday 06 July 2025 20:08:56 +0000 (0:00:00.983) 0:02:49.430 *********** 2025-07-06 20:17:14.304583 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-07-06 20:17:14.304590 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-07-06 20:17:14.304600 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-07-06 20:17:14.304606 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-07-06 20:17:14.304612 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-07-06 20:17:14.304618 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-07-06 20:17:14.304624 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-07-06 20:17:14.304630 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-07-06 20:17:14.304636 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-07-06 20:17:14.304642 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-07-06 20:17:14.304648 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-07-06 20:17:14.304655 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-07-06 20:17:14.304666 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-07-06 20:17:14.304672 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-07-06 20:17:14.304678 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-07-06 20:17:14.304684 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 
2025-07-06 20:17:14.304690 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-07-06 20:17:14.304696 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-07-06 20:17:14.304702 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-07-06 20:17:14.304708 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-07-06 20:17:14.304718 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-07-06 20:17:14.304724 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-07-06 20:17:14.304731 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-07-06 20:17:14.304737 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-07-06 20:17:14.304743 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-07-06 20:17:14.304749 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-07-06 20:17:14.304755 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-07-06 20:17:14.304761 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-07-06 20:17:14.304767 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-07-06 20:17:14.304773 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-07-06 20:17:14.304779 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-07-06 20:17:14.304785 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-07-06 20:17:14.304791 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-07-06 20:17:14.304797 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2025-07-06 20:17:14.304803 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-07-06 20:17:14.304809 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-07-06 20:17:14.304815 | 
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2025-07-06 20:17:14.304822 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-07-06 20:17:14.304828 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2025-07-06 20:17:14.304834 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-07-06 20:17:14.304840 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2025-07-06 20:17:14.304846 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2025-07-06 20:17:14.304852 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2025-07-06 20:17:14.304858 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-07-06 20:17:14.304864 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-07-06 20:17:14.304870 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-07-06 20:17:14.304876 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-07-06 20:17:14.304882 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-07-06 20:17:14.304888 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-07-06 20:17:14.304894 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-07-06 20:17:14.304900 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-07-06 20:17:14.304907 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-07-06 20:17:14.304913 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-07-06 20:17:14.304919 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-07-06 20:17:14.304931 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-07-06 20:17:14.304937 | orchestrator | changed: 
[testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-07-06 20:17:14.304943 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-07-06 20:17:14.304949 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-07-06 20:17:14.304955 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-07-06 20:17:14.304961 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-07-06 20:17:14.304967 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-07-06 20:17:14.304974 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-07-06 20:17:14.304980 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-07-06 20:17:14.304989 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-07-06 20:17:14.304995 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-07-06 20:17:14.305001 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-07-06 20:17:14.305007 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-07-06 20:17:14.305013 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-07-06 20:17:14.305020 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-07-06 20:17:14.305026 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-07-06 20:17:14.305032 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-07-06 20:17:14.305038 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-07-06 20:17:14.305044 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-07-06 20:17:14.305050 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 
2025-07-06 20:17:14.305056 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-07-06 20:17:14.305062 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-07-06 20:17:14.305068 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-07-06 20:17:14.305074 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-07-06 20:17:14.305083 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-07-06 20:17:14.305090 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-07-06 20:17:14.305096 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-07-06 20:17:14.305102 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-07-06 20:17:14.305108 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-07-06 20:17:14.305114 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-07-06 20:17:14.305121 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-07-06 20:17:14.305127 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-07-06 20:17:14.305133 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-07-06 20:17:14.305139 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-07-06 20:17:14.305145 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-07-06 20:17:14.305151 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-07-06 20:17:14.305157 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-07-06 20:17:14.305163 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-07-06 20:17:14.305186 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-07-06 20:17:14.305193 | 
orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-07-06 20:17:14.305199 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-07-06 20:17:14.305210 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-07-06 20:17:14.305216 | orchestrator | 2025-07-06 20:17:14.305222 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2025-07-06 20:17:14.305228 | orchestrator | Sunday 06 July 2025 20:09:02 +0000 (0:00:06.478) 0:02:55.909 *********** 2025-07-06 20:17:14.305234 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.305241 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.305247 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.305253 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:17:14.305260 | orchestrator | 2025-07-06 20:17:14.305266 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2025-07-06 20:17:14.305272 | orchestrator | Sunday 06 July 2025 20:09:03 +0000 (0:00:01.175) 0:02:57.084 *********** 2025-07-06 20:17:14.305278 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-07-06 20:17:14.305285 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-07-06 20:17:14.305291 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-07-06 20:17:14.305297 | orchestrator | 2025-07-06 20:17:14.305303 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2025-07-06 20:17:14.305309 | orchestrator | Sunday 06 July 2025 20:09:04 +0000 (0:00:00.753) 
0:02:57.837 *********** 2025-07-06 20:17:14.305315 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-07-06 20:17:14.305322 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-07-06 20:17:14.305328 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-07-06 20:17:14.305334 | orchestrator | 2025-07-06 20:17:14.305340 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2025-07-06 20:17:14.305346 | orchestrator | Sunday 06 July 2025 20:09:06 +0000 (0:00:01.618) 0:02:59.456 *********** 2025-07-06 20:17:14.305356 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:17:14.305363 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:17:14.305369 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:17:14.305375 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.305381 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.305387 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.305393 | orchestrator | 2025-07-06 20:17:14.305399 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2025-07-06 20:17:14.305406 | orchestrator | Sunday 06 July 2025 20:09:06 +0000 (0:00:00.695) 0:03:00.152 *********** 2025-07-06 20:17:14.305412 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:17:14.305418 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:17:14.305424 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:17:14.305430 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.305436 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.305442 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.305448 | orchestrator | 2025-07-06 
20:17:14.305454 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2025-07-06 20:17:14.305460 | orchestrator | Sunday 06 July 2025 20:09:07 +0000 (0:00:00.904) 0:03:01.056 *********** 2025-07-06 20:17:14.305466 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.305472 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.305478 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.305484 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.305490 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.305503 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.305509 | orchestrator | 2025-07-06 20:17:14.305515 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2025-07-06 20:17:14.305521 | orchestrator | Sunday 06 July 2025 20:09:08 +0000 (0:00:00.696) 0:03:01.753 *********** 2025-07-06 20:17:14.305531 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.305538 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.305544 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.305550 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.305556 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.305562 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.305568 | orchestrator | 2025-07-06 20:17:14.305574 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2025-07-06 20:17:14.305580 | orchestrator | Sunday 06 July 2025 20:09:09 +0000 (0:00:00.825) 0:03:02.579 *********** 2025-07-06 20:17:14.305587 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.305593 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.305599 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.305605 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.305611 | orchestrator | skipping: 
[testbed-node-1] 2025-07-06 20:17:14.305617 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.305623 | orchestrator | 2025-07-06 20:17:14.305629 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-07-06 20:17:14.305636 | orchestrator | Sunday 06 July 2025 20:09:09 +0000 (0:00:00.509) 0:03:03.089 *********** 2025-07-06 20:17:14.305642 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.305648 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.305654 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.305660 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.305666 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.305672 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.305678 | orchestrator | 2025-07-06 20:17:14.305684 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-07-06 20:17:14.305691 | orchestrator | Sunday 06 July 2025 20:09:10 +0000 (0:00:00.703) 0:03:03.793 *********** 2025-07-06 20:17:14.305697 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.305703 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.305709 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.305715 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.305721 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.305727 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.305733 | orchestrator | 2025-07-06 20:17:14.305740 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-07-06 20:17:14.305746 | orchestrator | Sunday 06 July 2025 20:09:10 +0000 (0:00:00.522) 0:03:04.315 *********** 2025-07-06 20:17:14.305752 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.305758 | orchestrator | skipping: 
[testbed-node-4] 2025-07-06 20:17:14.305764 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.305770 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.305776 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.305782 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.305788 | orchestrator | 2025-07-06 20:17:14.305795 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-07-06 20:17:14.305801 | orchestrator | Sunday 06 July 2025 20:09:11 +0000 (0:00:00.575) 0:03:04.890 *********** 2025-07-06 20:17:14.305807 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.305813 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.305819 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.305825 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:17:14.305831 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:17:14.305837 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:17:14.305851 | orchestrator | 2025-07-06 20:17:14.305857 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2025-07-06 20:17:14.305863 | orchestrator | Sunday 06 July 2025 20:09:14 +0000 (0:00:02.944) 0:03:07.835 *********** 2025-07-06 20:17:14.305870 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:17:14.305876 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:17:14.305882 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:17:14.305888 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.305894 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.305900 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.305906 | orchestrator | 2025-07-06 20:17:14.305913 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2025-07-06 20:17:14.305919 | orchestrator | Sunday 06 July 2025 20:09:15 +0000 (0:00:00.644) 0:03:08.480 *********** 
2025-07-06 20:17:14.305925 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:17:14.305931 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:17:14.305937 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:17:14.305943 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.305949 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.305959 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.305965 | orchestrator | 2025-07-06 20:17:14.305971 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2025-07-06 20:17:14.305977 | orchestrator | Sunday 06 July 2025 20:09:15 +0000 (0:00:00.534) 0:03:09.014 *********** 2025-07-06 20:17:14.305983 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.305989 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.305995 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.306001 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.306008 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.306014 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.306045 | orchestrator | 2025-07-06 20:17:14.306051 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2025-07-06 20:17:14.306057 | orchestrator | Sunday 06 July 2025 20:09:16 +0000 (0:00:00.841) 0:03:09.856 *********** 2025-07-06 20:17:14.306064 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-07-06 20:17:14.306070 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-07-06 20:17:14.306076 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-07-06 20:17:14.306082 | orchestrator | skipping: [testbed-node-0] 2025-07-06 
20:17:14.306089 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.306095 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.306101 | orchestrator |
2025-07-06 20:17:14.306111 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2025-07-06 20:17:14.306117 | orchestrator | Sunday 06 July 2025 20:09:17 +0000 (0:00:00.642) 0:03:10.498 ***********
2025-07-06 20:17:14.306125 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2025-07-06 20:17:14.306134 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2025-07-06 20:17:14.306142 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.306148 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2025-07-06 20:17:14.306160 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2025-07-06 20:17:14.306166 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:17:14.306188 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2025-07-06 20:17:14.306194 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2025-07-06 20:17:14.306200 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:17:14.306207 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.306213 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.306219 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.306225 | orchestrator |
2025-07-06 20:17:14.306231 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2025-07-06 20:17:14.306237 | orchestrator | Sunday 06 July 2025 20:09:17 +0000 (0:00:00.833) 0:03:11.332 ***********
2025-07-06 20:17:14.306243 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.306249 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:17:14.306255 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:17:14.306261 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.306267 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.306273 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.306279 | orchestrator |
2025-07-06 20:17:14.306285 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2025-07-06 20:17:14.306292 | orchestrator | Sunday 06 July 2025 20:09:18 +0000 (0:00:00.624) 0:03:11.957 ***********
2025-07-06 20:17:14.306298 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.306304 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:17:14.306310 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:17:14.306316 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.306325 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.306331 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.306337 | orchestrator |
2025-07-06 20:17:14.306344 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-07-06 20:17:14.306350 | orchestrator | Sunday 06 July 2025 20:09:19 +0000 (0:00:00.779) 0:03:12.736 ***********
2025-07-06 20:17:14.306356 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.306362 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:17:14.306368 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:17:14.306374 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.306380 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.306386 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.306392 | orchestrator |
2025-07-06 20:17:14.306398 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-07-06 20:17:14.306405 | orchestrator | Sunday 06 July 2025 20:09:19 +0000 (0:00:00.594) 0:03:13.331 ***********
2025-07-06 20:17:14.306411 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.306417 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:17:14.306423 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:17:14.306429 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.306435 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.306446 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.306452 | orchestrator |
2025-07-06 20:17:14.306458 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-07-06 20:17:14.306465 | orchestrator | Sunday 06 July 2025 20:09:20 +0000 (0:00:00.908) 0:03:14.240 ***********
2025-07-06 20:17:14.306471 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.306488 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:17:14.306494 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:17:14.306501 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.306507 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.306513 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.306519 | orchestrator |
2025-07-06 20:17:14.306525 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-07-06 20:17:14.306531 | orchestrator | Sunday 06 July 2025 20:09:21 +0000 (0:00:00.636) 0:03:14.876 ***********
2025-07-06 20:17:14.306537 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:17:14.306543 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:17:14.306549 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:17:14.306555 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.306561 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.306567 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.306573 | orchestrator |
2025-07-06 20:17:14.306579 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-07-06 20:17:14.306585 | orchestrator | Sunday 06 July 2025 20:09:22 +0000 (0:00:00.855) 0:03:15.732 ***********
2025-07-06 20:17:14.306592 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-06 20:17:14.306598 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-06 20:17:14.306604 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-06 20:17:14.306610 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.306616 | orchestrator |
2025-07-06 20:17:14.306622 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-07-06 20:17:14.306628 | orchestrator | Sunday 06 July 2025 20:09:22 +0000 (0:00:00.442) 0:03:16.175 ***********
2025-07-06 20:17:14.306634 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-06 20:17:14.306640 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-06 20:17:14.306646 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-06 20:17:14.306652 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.306658 | orchestrator |
2025-07-06 20:17:14.306664 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-07-06 20:17:14.306670 | orchestrator | Sunday 06 July 2025 20:09:23 +0000 (0:00:00.409) 0:03:16.584 ***********
2025-07-06 20:17:14.306677 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-06 20:17:14.306683 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-06 20:17:14.306689 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-06 20:17:14.306695 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.306701 | orchestrator |
2025-07-06 20:17:14.306707 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-07-06 20:17:14.306713 | orchestrator | Sunday 06 July 2025 20:09:23 +0000 (0:00:00.428) 0:03:17.013 ***********
2025-07-06 20:17:14.306719 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:17:14.306725 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:17:14.306731 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:17:14.306738 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.306744 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.306750 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.306756 | orchestrator |
2025-07-06 20:17:14.306762 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-07-06 20:17:14.306768 | orchestrator | Sunday 06 July 2025 20:09:24 +0000 (0:00:00.672) 0:03:17.685 ***********
2025-07-06 20:17:14.306774 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-07-06 20:17:14.306785 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-07-06 20:17:14.306791 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-07-06 20:17:14.306797 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-07-06 20:17:14.306803 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.306809 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-07-06 20:17:14.306815 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.306821 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-07-06 20:17:14.306827 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.306833 | orchestrator |
2025-07-06 20:17:14.306839 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2025-07-06 20:17:14.306845 | orchestrator | Sunday 06 July 2025 20:09:26 +0000 (0:00:02.052) 0:03:19.738 ***********
2025-07-06 20:17:14.306851 | orchestrator | changed: [testbed-node-3]
2025-07-06 20:17:14.306858 | orchestrator | changed: [testbed-node-4]
2025-07-06 20:17:14.306864 | orchestrator | changed: [testbed-node-5]
2025-07-06 20:17:14.306870 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:17:14.306879 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:17:14.306885 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:17:14.306891 | orchestrator |
2025-07-06 20:17:14.306897 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-07-06 20:17:14.306903 | orchestrator | Sunday 06 July 2025 20:09:28 +0000 (0:00:02.508) 0:03:22.246 ***********
2025-07-06 20:17:14.306909 | orchestrator | changed: [testbed-node-5]
2025-07-06 20:17:14.306915 | orchestrator | changed: [testbed-node-4]
2025-07-06 20:17:14.306922 | orchestrator | changed: [testbed-node-3]
2025-07-06 20:17:14.306928 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:17:14.306934 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:17:14.306940 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:17:14.306946 | orchestrator |
2025-07-06 20:17:14.306952 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2025-07-06 20:17:14.306958 | orchestrator | Sunday 06 July 2025 20:09:29 +0000 (0:00:00.996) 0:03:23.243 ***********
2025-07-06 20:17:14.306964 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.306970 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:17:14.306976 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:17:14.306982 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-06 20:17:14.306988 | orchestrator |
2025-07-06 20:17:14.306995 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2025-07-06 20:17:14.307001 | orchestrator | Sunday 06 July 2025 20:09:30 +0000 (0:00:00.943) 0:03:24.186 ***********
2025-07-06 20:17:14.307007 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:17:14.307013 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:17:14.307019 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:17:14.307025 | orchestrator |
2025-07-06 20:17:14.307035 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2025-07-06 20:17:14.307042 | orchestrator | Sunday 06 July 2025 20:09:31 +0000 (0:00:00.343) 0:03:24.530 ***********
2025-07-06 20:17:14.307048 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:17:14.307054 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:17:14.307060 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:17:14.307066 | orchestrator |
2025-07-06 20:17:14.307072 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2025-07-06 20:17:14.307078 | orchestrator | Sunday 06 July 2025 20:09:32 +0000 (0:00:01.356) 0:03:25.887 ***********
2025-07-06 20:17:14.307084 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-07-06 20:17:14.307091 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-07-06 20:17:14.307097 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-07-06 20:17:14.307103 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.307109 | orchestrator |
2025-07-06 20:17:14.307115 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2025-07-06 20:17:14.307126 | orchestrator | Sunday 06 July 2025 20:09:33 +0000 (0:00:00.655) 0:03:26.543 ***********
2025-07-06 20:17:14.307132 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:17:14.307138 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:17:14.307144 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:17:14.307151 | orchestrator |
2025-07-06 20:17:14.307157 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2025-07-06 20:17:14.307163 | orchestrator | Sunday 06 July 2025 20:09:33 +0000 (0:00:00.348) 0:03:26.891 ***********
2025-07-06 20:17:14.307211 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.307222 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.307233 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.307241 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-06 20:17:14.307247 | orchestrator |
2025-07-06 20:17:14.307253 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2025-07-06 20:17:14.307259 | orchestrator | Sunday 06 July 2025 20:09:34 +0000 (0:00:01.017) 0:03:27.909 ***********
2025-07-06 20:17:14.307265 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-06 20:17:14.307272 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-06 20:17:14.307278 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-06 20:17:14.307284 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.307290 | orchestrator |
2025-07-06 20:17:14.307296 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2025-07-06 20:17:14.307302 | orchestrator | Sunday 06 July 2025 20:09:34 +0000 (0:00:00.399) 0:03:28.308 ***********
2025-07-06 20:17:14.307308 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.307314 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:17:14.307321 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:17:14.307327 | orchestrator |
2025-07-06 20:17:14.307333 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2025-07-06 20:17:14.307339 | orchestrator | Sunday 06 July 2025 20:09:35 +0000 (0:00:00.376) 0:03:28.685 ***********
2025-07-06 20:17:14.307345 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.307351 | orchestrator |
2025-07-06 20:17:14.307357 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2025-07-06 20:17:14.307363 | orchestrator | Sunday 06 July 2025 20:09:35 +0000 (0:00:00.228) 0:03:28.914 ***********
2025-07-06 20:17:14.307369 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.307375 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:17:14.307381 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:17:14.307387 | orchestrator |
2025-07-06 20:17:14.307393 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2025-07-06 20:17:14.307399 | orchestrator | Sunday 06 July 2025 20:09:35 +0000 (0:00:00.342) 0:03:29.257 ***********
2025-07-06 20:17:14.307405 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.307411 | orchestrator |
2025-07-06 20:17:14.307417 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2025-07-06 20:17:14.307424 | orchestrator | Sunday 06 July 2025 20:09:36 +0000 (0:00:00.239) 0:03:29.496 ***********
2025-07-06 20:17:14.307430 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.307436 | orchestrator |
2025-07-06 20:17:14.307442 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2025-07-06 20:17:14.307452 | orchestrator | Sunday 06 July 2025 20:09:36 +0000 (0:00:00.229) 0:03:29.726 ***********
2025-07-06 20:17:14.307458 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.307464 | orchestrator |
2025-07-06 20:17:14.307470 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2025-07-06 20:17:14.307477 | orchestrator | Sunday 06 July 2025 20:09:36 +0000 (0:00:00.361) 0:03:30.088 ***********
2025-07-06 20:17:14.307483 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.307489 | orchestrator |
2025-07-06 20:17:14.307495 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2025-07-06 20:17:14.307507 | orchestrator | Sunday 06 July 2025 20:09:36 +0000 (0:00:00.219) 0:03:30.307 ***********
2025-07-06 20:17:14.307513 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.307519 | orchestrator |
2025-07-06 20:17:14.307525 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2025-07-06 20:17:14.307531 | orchestrator | Sunday 06 July 2025 20:09:37 +0000 (0:00:00.248) 0:03:30.556 ***********
2025-07-06 20:17:14.307537 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-06 20:17:14.307543 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-06 20:17:14.307549 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-06 20:17:14.307555 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.307561 | orchestrator |
2025-07-06 20:17:14.307568 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2025-07-06 20:17:14.307574 | orchestrator | Sunday 06 July 2025 20:09:37 +0000 (0:00:00.428) 0:03:30.985 ***********
2025-07-06 20:17:14.307580 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.307590 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:17:14.307597 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:17:14.307603 | orchestrator |
2025-07-06 20:17:14.307609 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2025-07-06 20:17:14.307615 | orchestrator | Sunday 06 July 2025 20:09:37 +0000 (0:00:00.330) 0:03:31.315 ***********
2025-07-06 20:17:14.307621 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.307627 | orchestrator |
2025-07-06 20:17:14.307633 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2025-07-06 20:17:14.307640 | orchestrator | Sunday 06 July 2025 20:09:38 +0000 (0:00:00.215) 0:03:31.531 ***********
2025-07-06 20:17:14.307646 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.307652 | orchestrator |
2025-07-06 20:17:14.307658 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2025-07-06 20:17:14.307664 | orchestrator | Sunday 06 July 2025 20:09:38 +0000 (0:00:00.209) 0:03:31.740 ***********
2025-07-06 20:17:14.307670 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.307677 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.307683 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.307689 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-06 20:17:14.307695 | orchestrator |
2025-07-06 20:17:14.307701 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2025-07-06 20:17:14.307707 | orchestrator | Sunday 06 July 2025 20:09:39 +0000 (0:00:01.038) 0:03:32.778 ***********
2025-07-06 20:17:14.307713 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:17:14.307719 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:17:14.307726 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:17:14.307731 | orchestrator |
2025-07-06 20:17:14.307736 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2025-07-06 20:17:14.307742 | orchestrator | Sunday 06 July 2025 20:09:39 +0000 (0:00:00.337) 0:03:33.116 ***********
2025-07-06 20:17:14.307747 | orchestrator | changed: [testbed-node-3]
2025-07-06 20:17:14.307753 | orchestrator | changed: [testbed-node-4]
2025-07-06 20:17:14.307758 | orchestrator | changed: [testbed-node-5]
2025-07-06 20:17:14.307763 | orchestrator |
2025-07-06 20:17:14.307769 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2025-07-06 20:17:14.307774 | orchestrator | Sunday 06 July 2025 20:09:40 +0000 (0:00:01.213) 0:03:34.329 ***********
2025-07-06 20:17:14.307779 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-06 20:17:14.307785 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-06 20:17:14.307790 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-06 20:17:14.307795 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.307801 | orchestrator |
2025-07-06 20:17:14.307806 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2025-07-06 20:17:14.307816 | orchestrator | Sunday 06 July 2025 20:09:42 +0000 (0:00:01.112) 0:03:35.442 ***********
2025-07-06 20:17:14.307821 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:17:14.307827 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:17:14.307832 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:17:14.307837 | orchestrator |
2025-07-06 20:17:14.307843 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2025-07-06 20:17:14.307848 | orchestrator | Sunday 06 July 2025 20:09:42 +0000 (0:00:00.343) 0:03:35.785 ***********
2025-07-06 20:17:14.307853 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.307859 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.307864 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.307869 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-06 20:17:14.307875 | orchestrator |
2025-07-06 20:17:14.307880 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2025-07-06 20:17:14.307885 | orchestrator | Sunday 06 July 2025 20:09:43 +0000 (0:00:01.008) 0:03:36.793 ***********
2025-07-06 20:17:14.307891 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:17:14.307896 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:17:14.307902 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:17:14.307907 | orchestrator |
2025-07-06 20:17:14.307912 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2025-07-06 20:17:14.307918 | orchestrator | Sunday 06 July 2025 20:09:43 +0000 (0:00:00.351) 0:03:37.145 ***********
2025-07-06 20:17:14.307923 | orchestrator | changed: [testbed-node-3]
2025-07-06 20:17:14.307928 | orchestrator | changed: [testbed-node-4]
2025-07-06 20:17:14.307934 | orchestrator | changed: [testbed-node-5]
2025-07-06 20:17:14.307939 | orchestrator |
2025-07-06 20:17:14.307947 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2025-07-06 20:17:14.307953 | orchestrator | Sunday 06 July 2025 20:09:44 +0000 (0:00:01.176) 0:03:38.321 ***********
2025-07-06 20:17:14.307958 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-06 20:17:14.307964 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-06 20:17:14.307969 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-06 20:17:14.307974 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.307979 | orchestrator |
2025-07-06 20:17:14.307985 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2025-07-06 20:17:14.307990 | orchestrator | Sunday 06 July 2025 20:09:45 +0000 (0:00:00.878) 0:03:39.199 ***********
2025-07-06 20:17:14.307996 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:17:14.308001 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:17:14.308006 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:17:14.308011 | orchestrator |
2025-07-06 20:17:14.308017 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2025-07-06 20:17:14.308022 | orchestrator | Sunday 06 July 2025 20:09:46 +0000 (0:00:00.347) 0:03:39.547 ***********
2025-07-06 20:17:14.308028 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.308033 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:17:14.308038 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:17:14.308044 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.308049 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.308055 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.308064 | orchestrator |
2025-07-06 20:17:14.308073 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2025-07-06 20:17:14.308086 | orchestrator | Sunday 06 July 2025 20:09:47 +0000 (0:00:00.877) 0:03:40.424 ***********
2025-07-06 20:17:14.308095 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:17:14.308104 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:17:14.308114 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:17:14.308123 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-06 20:17:14.308133 | orchestrator |
2025-07-06 20:17:14.308143 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2025-07-06 20:17:14.308148 | orchestrator | Sunday 06 July 2025 20:09:48 +0000 (0:00:01.072) 0:03:41.496 ***********
2025-07-06 20:17:14.308154 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:17:14.308159 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:17:14.308165 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:17:14.308187 | orchestrator |
2025-07-06 20:17:14.308192 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2025-07-06 20:17:14.308198 | orchestrator | Sunday 06 July 2025 20:09:48 +0000 (0:00:00.350) 0:03:41.847 ***********
2025-07-06 20:17:14.308203 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:17:14.308209 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:17:14.308214 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:17:14.308219 | orchestrator |
2025-07-06 20:17:14.308224 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2025-07-06 20:17:14.308230 | orchestrator | Sunday 06 July 2025 20:09:49 +0000 (0:00:01.267) 0:03:43.115 ***********
2025-07-06 20:17:14.308235 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-07-06 20:17:14.308240 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-07-06 20:17:14.308246 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-07-06 20:17:14.308251 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.308256 | orchestrator |
2025-07-06 20:17:14.308262 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2025-07-06 20:17:14.308267 | orchestrator | Sunday 06 July 2025 20:09:50 +0000 (0:00:00.857) 0:03:43.973 ***********
2025-07-06 20:17:14.308272 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:17:14.308277 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:17:14.308283 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:17:14.308288 | orchestrator |
2025-07-06 20:17:14.308293 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2025-07-06 20:17:14.308299 | orchestrator |
2025-07-06 20:17:14.308304 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-07-06 20:17:14.308309 | orchestrator | Sunday 06 July 2025 20:09:51 +0000 (0:00:00.796) 0:03:44.769 ***********
2025-07-06 20:17:14.308315 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-06 20:17:14.308320 | orchestrator |
2025-07-06 20:17:14.308326 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-07-06 20:17:14.308331 | orchestrator | Sunday 06 July 2025 20:09:51 +0000 (0:00:00.498) 0:03:45.267 ***********
2025-07-06 20:17:14.308336 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-06 20:17:14.308342 | orchestrator |
2025-07-06 20:17:14.308347 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-07-06 20:17:14.308352 | orchestrator | Sunday 06 July 2025 20:09:52 +0000 (0:00:00.688) 0:03:45.955 ***********
2025-07-06 20:17:14.308357 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:17:14.308363 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:17:14.308368 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:17:14.308373 | orchestrator |
2025-07-06 20:17:14.308379 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-07-06 20:17:14.308384 | orchestrator | Sunday 06 July 2025 20:09:53 +0000 (0:00:00.682) 0:03:46.638 ***********
2025-07-06 20:17:14.308389 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.308395 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.308400 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.308405 | orchestrator |
2025-07-06 20:17:14.308411 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-07-06 20:17:14.308416 | orchestrator | Sunday 06 July 2025 20:09:53 +0000 (0:00:00.288) 0:03:46.927 ***********
2025-07-06 20:17:14.308421 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.308427 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.308439 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.308444 | orchestrator |
2025-07-06 20:17:14.308450 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-07-06 20:17:14.308455 | orchestrator | Sunday 06 July 2025 20:09:53 +0000 (0:00:00.287) 0:03:47.214 ***********
2025-07-06 20:17:14.308460 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.308466 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.308471 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.308476 | orchestrator |
2025-07-06 20:17:14.308482 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-07-06 20:17:14.308487 | orchestrator | Sunday 06 July 2025 20:09:54 +0000 (0:00:00.540) 0:03:47.754 ***********
2025-07-06 20:17:14.308492 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:17:14.308498 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:17:14.308503 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:17:14.308508 | orchestrator |
2025-07-06 20:17:14.308514 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-07-06 20:17:14.308519 | orchestrator | Sunday 06 July 2025 20:09:55 +0000 (0:00:00.700) 0:03:48.454 ***********
2025-07-06 20:17:14.308524 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.308530 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.308535 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.308540 | orchestrator |
2025-07-06 20:17:14.308546 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-07-06 20:17:14.308551 | orchestrator | Sunday 06 July 2025 20:09:55 +0000 (0:00:00.327) 0:03:48.782 ***********
2025-07-06 20:17:14.308556 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.308561 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.308567 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.308572 | orchestrator |
2025-07-06 20:17:14.308581 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-07-06 20:17:14.308586 | orchestrator | Sunday 06 July 2025 20:09:55 +0000 (0:00:00.295) 0:03:49.077 ***********
2025-07-06 20:17:14.308592 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:17:14.308597 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:17:14.308602 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:17:14.308608 | orchestrator |
2025-07-06 20:17:14.308613 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-07-06 20:17:14.308618 | orchestrator | Sunday 06 July 2025 20:09:56 +0000 (0:00:01.000) 0:03:50.077 ***********
2025-07-06 20:17:14.308624 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:17:14.308629 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:17:14.308634 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:17:14.308639 | orchestrator |
2025-07-06 20:17:14.308645 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-07-06 20:17:14.308650 | orchestrator | Sunday 06 July 2025 20:09:57 +0000 (0:00:00.804) 0:03:50.882 ***********
2025-07-06 20:17:14.308655 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.308661 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.308666 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.308671 | orchestrator |
2025-07-06 20:17:14.308677 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-07-06 20:17:14.308682 | orchestrator | Sunday 06 July 2025 20:09:57 +0000 (0:00:00.302) 0:03:51.185 ***********
2025-07-06 20:17:14.308688 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:17:14.308693 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:17:14.308698 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:17:14.308703 | orchestrator |
2025-07-06 20:17:14.308709 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-07-06 20:17:14.308714 | orchestrator | Sunday 06 July 2025 20:09:58 +0000 (0:00:00.306) 0:03:51.491 ***********
2025-07-06 20:17:14.308719 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.308725 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.308730 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.308735 | orchestrator |
2025-07-06 20:17:14.308747 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-07-06 20:17:14.308752 | orchestrator | Sunday 06 July 2025 20:09:58 +0000 (0:00:00.566) 0:03:52.057 ***********
2025-07-06 20:17:14.308758 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.308763 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.308768 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.308774 | orchestrator |
2025-07-06 20:17:14.308779 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-07-06 20:17:14.308784 | orchestrator | Sunday 06 July 2025 20:09:59 +0000 (0:00:00.313) 0:03:52.371 ***********
2025-07-06 20:17:14.308790 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.308795 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.308800 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.308805 | orchestrator |
2025-07-06 20:17:14.308811 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-07-06 20:17:14.308816 | orchestrator | Sunday 06 July 2025 20:09:59 +0000 (0:00:00.290) 0:03:52.662 ***********
2025-07-06 20:17:14.308822 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.308827 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.308832 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.308837 | orchestrator |
2025-07-06 20:17:14.308843 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-07-06 20:17:14.308848 | orchestrator | Sunday 06 July 2025 20:09:59 +0000 (0:00:00.308) 0:03:52.971 ***********
2025-07-06 20:17:14.308853 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:17:14.308859 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:17:14.308864 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:17:14.308869 | orchestrator |
2025-07-06 20:17:14.308875 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-07-06 20:17:14.308880 | orchestrator | Sunday 06 July 2025 20:10:00 +0000 (0:00:00.516) 0:03:53.488 ***********
2025-07-06 20:17:14.308885 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:17:14.308890 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:17:14.308896 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:17:14.308901 | orchestrator |
2025-07-06 20:17:14.308906 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-07-06 20:17:14.308912 | orchestrator | Sunday 06 July 2025 20:10:00 +0000 (0:00:00.369) 0:03:53.857 *********** 2025-07-06 20:17:14.308917 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:17:14.308922 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:17:14.308928 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:17:14.308933 | orchestrator | 2025-07-06 20:17:14.308941 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-07-06 20:17:14.308946 | orchestrator | Sunday 06 July 2025 20:10:00 +0000 (0:00:00.380) 0:03:54.238 *********** 2025-07-06 20:17:14.308952 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:17:14.308957 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:17:14.308962 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:17:14.308968 | orchestrator | 2025-07-06 20:17:14.308973 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2025-07-06 20:17:14.308978 | orchestrator | Sunday 06 July 2025 20:10:01 +0000 (0:00:00.783) 0:03:55.021 *********** 2025-07-06 20:17:14.308984 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:17:14.308989 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:17:14.308994 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:17:14.309000 | orchestrator | 2025-07-06 20:17:14.309005 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2025-07-06 20:17:14.309010 | orchestrator | Sunday 06 July 2025 20:10:02 +0000 (0:00:00.455) 0:03:55.477 *********** 2025-07-06 20:17:14.309016 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:17:14.309021 | orchestrator | 2025-07-06 20:17:14.309026 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] 
************** 2025-07-06 20:17:14.309032 | orchestrator | Sunday 06 July 2025 20:10:02 +0000 (0:00:00.609) 0:03:56.086 *********** 2025-07-06 20:17:14.309043 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.309048 | orchestrator | 2025-07-06 20:17:14.309054 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2025-07-06 20:17:14.309062 | orchestrator | Sunday 06 July 2025 20:10:02 +0000 (0:00:00.169) 0:03:56.256 *********** 2025-07-06 20:17:14.309067 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-07-06 20:17:14.309073 | orchestrator | 2025-07-06 20:17:14.309078 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2025-07-06 20:17:14.309084 | orchestrator | Sunday 06 July 2025 20:10:04 +0000 (0:00:01.512) 0:03:57.768 *********** 2025-07-06 20:17:14.309089 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:17:14.309094 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:17:14.309100 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:17:14.309105 | orchestrator | 2025-07-06 20:17:14.309110 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2025-07-06 20:17:14.309116 | orchestrator | Sunday 06 July 2025 20:10:04 +0000 (0:00:00.344) 0:03:58.112 *********** 2025-07-06 20:17:14.309121 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:17:14.309126 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:17:14.309132 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:17:14.309137 | orchestrator | 2025-07-06 20:17:14.309142 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2025-07-06 20:17:14.309148 | orchestrator | Sunday 06 July 2025 20:10:05 +0000 (0:00:00.325) 0:03:58.438 *********** 2025-07-06 20:17:14.309153 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:17:14.309158 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:17:14.309164 | 
orchestrator | changed: [testbed-node-2] 2025-07-06 20:17:14.309187 | orchestrator | 2025-07-06 20:17:14.309194 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2025-07-06 20:17:14.309199 | orchestrator | Sunday 06 July 2025 20:10:06 +0000 (0:00:01.200) 0:03:59.639 *********** 2025-07-06 20:17:14.309204 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:17:14.309209 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:17:14.309215 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:17:14.309220 | orchestrator | 2025-07-06 20:17:14.309225 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2025-07-06 20:17:14.309230 | orchestrator | Sunday 06 July 2025 20:10:07 +0000 (0:00:01.017) 0:04:00.656 *********** 2025-07-06 20:17:14.309236 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:17:14.309245 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:17:14.309254 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:17:14.309262 | orchestrator | 2025-07-06 20:17:14.309271 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2025-07-06 20:17:14.309279 | orchestrator | Sunday 06 July 2025 20:10:07 +0000 (0:00:00.678) 0:04:01.335 *********** 2025-07-06 20:17:14.309288 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:17:14.309297 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:17:14.309307 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:17:14.309313 | orchestrator | 2025-07-06 20:17:14.309319 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2025-07-06 20:17:14.309324 | orchestrator | Sunday 06 July 2025 20:10:08 +0000 (0:00:00.813) 0:04:02.148 *********** 2025-07-06 20:17:14.309329 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:17:14.309335 | orchestrator | 2025-07-06 20:17:14.309340 | orchestrator | TASK [ceph-mon : Slurp admin keyring] 
****************************************** 2025-07-06 20:17:14.309345 | orchestrator | Sunday 06 July 2025 20:10:10 +0000 (0:00:01.246) 0:04:03.394 *********** 2025-07-06 20:17:14.309351 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:17:14.309356 | orchestrator | 2025-07-06 20:17:14.309361 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2025-07-06 20:17:14.309367 | orchestrator | Sunday 06 July 2025 20:10:10 +0000 (0:00:00.693) 0:04:04.088 *********** 2025-07-06 20:17:14.309372 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-07-06 20:17:14.309377 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:17:14.309388 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:17:14.309393 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-07-06 20:17:14.309398 | orchestrator | ok: [testbed-node-1] => (item=None) 2025-07-06 20:17:14.309404 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-07-06 20:17:14.309409 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-07-06 20:17:14.309415 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2025-07-06 20:17:14.309420 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-07-06 20:17:14.309425 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2025-07-06 20:17:14.309431 | orchestrator | ok: [testbed-node-2] => (item=None) 2025-07-06 20:17:14.309440 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2025-07-06 20:17:14.309445 | orchestrator | 2025-07-06 20:17:14.309450 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2025-07-06 20:17:14.309456 | orchestrator | Sunday 06 July 2025 20:10:13 +0000 (0:00:03.270) 0:04:07.358 *********** 2025-07-06 20:17:14.309461 
| orchestrator | changed: [testbed-node-0] 2025-07-06 20:17:14.309466 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:17:14.309472 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:17:14.309477 | orchestrator | 2025-07-06 20:17:14.309482 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2025-07-06 20:17:14.309488 | orchestrator | Sunday 06 July 2025 20:10:15 +0000 (0:00:01.426) 0:04:08.785 *********** 2025-07-06 20:17:14.309493 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:17:14.309498 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:17:14.309504 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:17:14.309509 | orchestrator | 2025-07-06 20:17:14.309514 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2025-07-06 20:17:14.309520 | orchestrator | Sunday 06 July 2025 20:10:15 +0000 (0:00:00.318) 0:04:09.103 *********** 2025-07-06 20:17:14.309525 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:17:14.309531 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:17:14.309536 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:17:14.309541 | orchestrator | 2025-07-06 20:17:14.309546 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2025-07-06 20:17:14.309552 | orchestrator | Sunday 06 July 2025 20:10:16 +0000 (0:00:00.341) 0:04:09.445 *********** 2025-07-06 20:17:14.309557 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:17:14.309563 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:17:14.309568 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:17:14.309573 | orchestrator | 2025-07-06 20:17:14.309582 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2025-07-06 20:17:14.309588 | orchestrator | Sunday 06 July 2025 20:10:17 +0000 (0:00:01.735) 0:04:11.181 *********** 2025-07-06 20:17:14.309593 | orchestrator | changed: [testbed-node-0] 
2025-07-06 20:17:14.309599 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:17:14.309604 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:17:14.309609 | orchestrator | 2025-07-06 20:17:14.309615 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2025-07-06 20:17:14.309620 | orchestrator | Sunday 06 July 2025 20:10:19 +0000 (0:00:01.529) 0:04:12.710 *********** 2025-07-06 20:17:14.309626 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.309631 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.309636 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.309641 | orchestrator | 2025-07-06 20:17:14.309647 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2025-07-06 20:17:14.309652 | orchestrator | Sunday 06 July 2025 20:10:19 +0000 (0:00:00.343) 0:04:13.054 *********** 2025-07-06 20:17:14.309657 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:17:14.309663 | orchestrator | 2025-07-06 20:17:14.309675 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2025-07-06 20:17:14.309680 | orchestrator | Sunday 06 July 2025 20:10:20 +0000 (0:00:00.510) 0:04:13.565 *********** 2025-07-06 20:17:14.309685 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.309691 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.309696 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.309701 | orchestrator | 2025-07-06 20:17:14.309707 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2025-07-06 20:17:14.309712 | orchestrator | Sunday 06 July 2025 20:10:20 +0000 (0:00:00.552) 0:04:14.117 *********** 2025-07-06 20:17:14.309718 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.309723 | orchestrator | skipping: 
[testbed-node-1] 2025-07-06 20:17:14.309728 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.309734 | orchestrator | 2025-07-06 20:17:14.309739 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2025-07-06 20:17:14.309745 | orchestrator | Sunday 06 July 2025 20:10:21 +0000 (0:00:00.305) 0:04:14.423 *********** 2025-07-06 20:17:14.309750 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:17:14.309755 | orchestrator | 2025-07-06 20:17:14.309761 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2025-07-06 20:17:14.309766 | orchestrator | Sunday 06 July 2025 20:10:21 +0000 (0:00:00.510) 0:04:14.933 *********** 2025-07-06 20:17:14.309771 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:17:14.309777 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:17:14.309782 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:17:14.309787 | orchestrator | 2025-07-06 20:17:14.309793 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2025-07-06 20:17:14.309798 | orchestrator | Sunday 06 July 2025 20:10:23 +0000 (0:00:02.010) 0:04:16.944 *********** 2025-07-06 20:17:14.309803 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:17:14.309809 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:17:14.309814 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:17:14.309819 | orchestrator | 2025-07-06 20:17:14.309825 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2025-07-06 20:17:14.309830 | orchestrator | Sunday 06 July 2025 20:10:24 +0000 (0:00:01.218) 0:04:18.162 *********** 2025-07-06 20:17:14.309835 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:17:14.309841 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:17:14.309846 | orchestrator | changed: 
[testbed-node-2] 2025-07-06 20:17:14.309851 | orchestrator | 2025-07-06 20:17:14.309857 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2025-07-06 20:17:14.309862 | orchestrator | Sunday 06 July 2025 20:10:26 +0000 (0:00:01.719) 0:04:19.882 *********** 2025-07-06 20:17:14.309867 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:17:14.309873 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:17:14.309878 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:17:14.309883 | orchestrator | 2025-07-06 20:17:14.309889 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2025-07-06 20:17:14.309894 | orchestrator | Sunday 06 July 2025 20:10:28 +0000 (0:00:01.847) 0:04:21.729 *********** 2025-07-06 20:17:14.309903 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:17:14.309909 | orchestrator | 2025-07-06 20:17:14.309914 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2025-07-06 20:17:14.309919 | orchestrator | Sunday 06 July 2025 20:10:29 +0000 (0:00:00.833) 0:04:22.563 *********** 2025-07-06 20:17:14.309925 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2025-07-06 20:17:14.309930 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:17:14.309935 | orchestrator | 2025-07-06 20:17:14.309941 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2025-07-06 20:17:14.309946 | orchestrator | Sunday 06 July 2025 20:10:51 +0000 (0:00:21.911) 0:04:44.474 *********** 2025-07-06 20:17:14.309955 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:17:14.309961 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:17:14.309966 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:17:14.309971 | orchestrator | 2025-07-06 20:17:14.309977 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2025-07-06 20:17:14.309982 | orchestrator | Sunday 06 July 2025 20:11:00 +0000 (0:00:09.703) 0:04:54.177 *********** 2025-07-06 20:17:14.309987 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.309993 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.309998 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.310003 | orchestrator | 2025-07-06 20:17:14.310009 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2025-07-06 20:17:14.310082 | orchestrator | Sunday 06 July 2025 20:11:01 +0000 (0:00:00.322) 0:04:54.500 *********** 2025-07-06 20:17:14.310110 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__6ef71050493f4f1c3324aebf5ef9d774f59df4f8'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2025-07-06 20:17:14.310118 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 
'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__6ef71050493f4f1c3324aebf5ef9d774f59df4f8'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2025-07-06 20:17:14.310125 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__6ef71050493f4f1c3324aebf5ef9d774f59df4f8'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-07-06 20:17:14.310132 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__6ef71050493f4f1c3324aebf5ef9d774f59df4f8'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2025-07-06 20:17:14.310138 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__6ef71050493f4f1c3324aebf5ef9d774f59df4f8'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-07-06 20:17:14.310144 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__6ef71050493f4f1c3324aebf5ef9d774f59df4f8'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__6ef71050493f4f1c3324aebf5ef9d774f59df4f8'}])  2025-07-06 20:17:14.310151 | orchestrator | 2025-07-06 20:17:14.310156 | orchestrator | RUNNING HANDLER 
[ceph-handler : Make tempdir for scripts] ********************** 2025-07-06 20:17:14.310162 | orchestrator | Sunday 06 July 2025 20:11:16 +0000 (0:00:14.876) 0:05:09.376 *********** 2025-07-06 20:17:14.310184 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.310190 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.310195 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.310201 | orchestrator | 2025-07-06 20:17:14.310206 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-07-06 20:17:14.310211 | orchestrator | Sunday 06 July 2025 20:11:16 +0000 (0:00:00.345) 0:05:09.721 *********** 2025-07-06 20:17:14.310221 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:17:14.310227 | orchestrator | 2025-07-06 20:17:14.310232 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-07-06 20:17:14.310241 | orchestrator | Sunday 06 July 2025 20:11:17 +0000 (0:00:00.754) 0:05:10.476 *********** 2025-07-06 20:17:14.310246 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:17:14.310252 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:17:14.310257 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:17:14.310262 | orchestrator | 2025-07-06 20:17:14.310267 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-07-06 20:17:14.310273 | orchestrator | Sunday 06 July 2025 20:11:17 +0000 (0:00:00.328) 0:05:10.804 *********** 2025-07-06 20:17:14.310278 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.310283 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.310289 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.310294 | orchestrator | 2025-07-06 20:17:14.310299 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-07-06 
20:17:14.310304 | orchestrator | Sunday 06 July 2025 20:11:17 +0000 (0:00:00.316) 0:05:11.120 *********** 2025-07-06 20:17:14.310310 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-07-06 20:17:14.310315 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-07-06 20:17:14.310320 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-07-06 20:17:14.310326 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.310331 | orchestrator | 2025-07-06 20:17:14.310336 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-07-06 20:17:14.310342 | orchestrator | Sunday 06 July 2025 20:11:18 +0000 (0:00:00.840) 0:05:11.961 *********** 2025-07-06 20:17:14.310347 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:17:14.310352 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:17:14.310358 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:17:14.310363 | orchestrator | 2025-07-06 20:17:14.310384 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-07-06 20:17:14.310390 | orchestrator | 2025-07-06 20:17:14.310395 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-07-06 20:17:14.310401 | orchestrator | Sunday 06 July 2025 20:11:19 +0000 (0:00:00.799) 0:05:12.761 *********** 2025-07-06 20:17:14.310406 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:17:14.310412 | orchestrator | 2025-07-06 20:17:14.310417 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-07-06 20:17:14.310422 | orchestrator | Sunday 06 July 2025 20:11:19 +0000 (0:00:00.481) 0:05:13.242 *********** 2025-07-06 20:17:14.310428 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2025-07-06 20:17:14.310433 | orchestrator | 2025-07-06 20:17:14.310438 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-07-06 20:17:14.310444 | orchestrator | Sunday 06 July 2025 20:11:20 +0000 (0:00:00.769) 0:05:14.012 *********** 2025-07-06 20:17:14.310449 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:17:14.310454 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:17:14.310460 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:17:14.310465 | orchestrator | 2025-07-06 20:17:14.310471 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-07-06 20:17:14.310476 | orchestrator | Sunday 06 July 2025 20:11:21 +0000 (0:00:00.712) 0:05:14.725 *********** 2025-07-06 20:17:14.310481 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.310486 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.310492 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.310497 | orchestrator | 2025-07-06 20:17:14.310502 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-07-06 20:17:14.310512 | orchestrator | Sunday 06 July 2025 20:11:21 +0000 (0:00:00.327) 0:05:15.052 *********** 2025-07-06 20:17:14.310517 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.310523 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.310528 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.310533 | orchestrator | 2025-07-06 20:17:14.310539 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-07-06 20:17:14.310544 | orchestrator | Sunday 06 July 2025 20:11:22 +0000 (0:00:00.521) 0:05:15.574 *********** 2025-07-06 20:17:14.310549 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.310555 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.310560 | orchestrator | skipping: 
[testbed-node-2] 2025-07-06 20:17:14.310565 | orchestrator | 2025-07-06 20:17:14.310570 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-07-06 20:17:14.310576 | orchestrator | Sunday 06 July 2025 20:11:22 +0000 (0:00:00.336) 0:05:15.911 *********** 2025-07-06 20:17:14.310581 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:17:14.310587 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:17:14.310592 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:17:14.310597 | orchestrator | 2025-07-06 20:17:14.310603 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-07-06 20:17:14.310608 | orchestrator | Sunday 06 July 2025 20:11:23 +0000 (0:00:00.650) 0:05:16.561 *********** 2025-07-06 20:17:14.310613 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.310619 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.310624 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.310629 | orchestrator | 2025-07-06 20:17:14.310635 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-07-06 20:17:14.310640 | orchestrator | Sunday 06 July 2025 20:11:23 +0000 (0:00:00.302) 0:05:16.864 *********** 2025-07-06 20:17:14.310646 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.310651 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.310656 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.310661 | orchestrator | 2025-07-06 20:17:14.310667 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-07-06 20:17:14.310672 | orchestrator | Sunday 06 July 2025 20:11:24 +0000 (0:00:00.579) 0:05:17.443 *********** 2025-07-06 20:17:14.310677 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:17:14.310683 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:17:14.310688 | orchestrator | ok: [testbed-node-2] 2025-07-06 
20:17:14.310693 | orchestrator | 2025-07-06 20:17:14.310699 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-07-06 20:17:14.310704 | orchestrator | Sunday 06 July 2025 20:11:24 +0000 (0:00:00.740) 0:05:18.184 *********** 2025-07-06 20:17:14.310710 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:17:14.310715 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:17:14.310720 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:17:14.310726 | orchestrator | 2025-07-06 20:17:14.310731 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-07-06 20:17:14.310736 | orchestrator | Sunday 06 July 2025 20:11:25 +0000 (0:00:00.765) 0:05:18.949 *********** 2025-07-06 20:17:14.310742 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.310747 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.310752 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.310758 | orchestrator | 2025-07-06 20:17:14.310782 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-07-06 20:17:14.310788 | orchestrator | Sunday 06 July 2025 20:11:25 +0000 (0:00:00.346) 0:05:19.296 *********** 2025-07-06 20:17:14.310793 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:17:14.310798 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:17:14.310804 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:17:14.310809 | orchestrator | 2025-07-06 20:17:14.310815 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-07-06 20:17:14.310820 | orchestrator | Sunday 06 July 2025 20:11:26 +0000 (0:00:00.595) 0:05:19.891 *********** 2025-07-06 20:17:14.310829 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.310834 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.310840 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.310845 | orchestrator | 
2025-07-06 20:17:14.310850 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-07-06 20:17:14.310856 | orchestrator | Sunday 06 July 2025 20:11:26 +0000 (0:00:00.353) 0:05:20.245 *********** 2025-07-06 20:17:14.310861 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.310866 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.310885 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.310891 | orchestrator | 2025-07-06 20:17:14.310897 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-07-06 20:17:14.310902 | orchestrator | Sunday 06 July 2025 20:11:27 +0000 (0:00:00.315) 0:05:20.560 *********** 2025-07-06 20:17:14.310908 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.310913 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.310919 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.310924 | orchestrator | 2025-07-06 20:17:14.310929 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-07-06 20:17:14.310934 | orchestrator | Sunday 06 July 2025 20:11:27 +0000 (0:00:00.312) 0:05:20.873 *********** 2025-07-06 20:17:14.310940 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.310945 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.310950 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.310956 | orchestrator | 2025-07-06 20:17:14.310961 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-07-06 20:17:14.310966 | orchestrator | Sunday 06 July 2025 20:11:28 +0000 (0:00:00.666) 0:05:21.540 *********** 2025-07-06 20:17:14.310972 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.310977 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.310982 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.310988 | orchestrator | 
2025-07-06 20:17:14.310993 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-07-06 20:17:14.310998 | orchestrator | Sunday 06 July 2025 20:11:28 +0000 (0:00:00.293) 0:05:21.833 *********** 2025-07-06 20:17:14.311004 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:17:14.311009 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:17:14.311014 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:17:14.311020 | orchestrator | 2025-07-06 20:17:14.311025 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-07-06 20:17:14.311030 | orchestrator | Sunday 06 July 2025 20:11:28 +0000 (0:00:00.321) 0:05:22.154 *********** 2025-07-06 20:17:14.311036 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:17:14.311041 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:17:14.311047 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:17:14.311052 | orchestrator | 2025-07-06 20:17:14.311057 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-07-06 20:17:14.311063 | orchestrator | Sunday 06 July 2025 20:11:29 +0000 (0:00:00.352) 0:05:22.507 *********** 2025-07-06 20:17:14.311068 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:17:14.311073 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:17:14.311079 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:17:14.311084 | orchestrator | 2025-07-06 20:17:14.311090 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2025-07-06 20:17:14.311095 | orchestrator | Sunday 06 July 2025 20:11:29 +0000 (0:00:00.769) 0:05:23.276 *********** 2025-07-06 20:17:14.311100 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-07-06 20:17:14.311106 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-07-06 20:17:14.311112 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => 
(item=testbed-node-2) 2025-07-06 20:17:14.311117 | orchestrator | 2025-07-06 20:17:14.311122 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2025-07-06 20:17:14.311128 | orchestrator | Sunday 06 July 2025 20:11:30 +0000 (0:00:00.630) 0:05:23.907 *********** 2025-07-06 20:17:14.311137 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:17:14.311142 | orchestrator | 2025-07-06 20:17:14.311148 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2025-07-06 20:17:14.311153 | orchestrator | Sunday 06 July 2025 20:11:31 +0000 (0:00:00.545) 0:05:24.452 *********** 2025-07-06 20:17:14.311159 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:17:14.311164 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:17:14.311206 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:17:14.311213 | orchestrator | 2025-07-06 20:17:14.311218 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2025-07-06 20:17:14.311224 | orchestrator | Sunday 06 July 2025 20:11:32 +0000 (0:00:00.951) 0:05:25.404 *********** 2025-07-06 20:17:14.311229 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.311234 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.311240 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.311245 | orchestrator | 2025-07-06 20:17:14.311250 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2025-07-06 20:17:14.311259 | orchestrator | Sunday 06 July 2025 20:11:32 +0000 (0:00:00.313) 0:05:25.718 *********** 2025-07-06 20:17:14.311265 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-07-06 20:17:14.311270 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-07-06 20:17:14.311275 | orchestrator | changed: [testbed-node-0] => (item=None) 
2025-07-06 20:17:14.311281 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-07-06 20:17:14.311286 | orchestrator | 2025-07-06 20:17:14.311292 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2025-07-06 20:17:14.311297 | orchestrator | Sunday 06 July 2025 20:11:42 +0000 (0:00:10.307) 0:05:36.025 *********** 2025-07-06 20:17:14.311302 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:17:14.311308 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:17:14.311313 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:17:14.311318 | orchestrator | 2025-07-06 20:17:14.311324 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2025-07-06 20:17:14.311329 | orchestrator | Sunday 06 July 2025 20:11:42 +0000 (0:00:00.333) 0:05:36.359 *********** 2025-07-06 20:17:14.311334 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-07-06 20:17:14.311340 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-07-06 20:17:14.311345 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-07-06 20:17:14.311351 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-07-06 20:17:14.311356 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:17:14.311361 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:17:14.311367 | orchestrator | 2025-07-06 20:17:14.311388 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2025-07-06 20:17:14.311394 | orchestrator | Sunday 06 July 2025 20:11:45 +0000 (0:00:02.427) 0:05:38.786 *********** 2025-07-06 20:17:14.311400 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-07-06 20:17:14.311405 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-07-06 20:17:14.311411 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-07-06 
20:17:14.311416 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-07-06 20:17:14.311422 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-07-06 20:17:14.311427 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-07-06 20:17:14.311432 | orchestrator | 2025-07-06 20:17:14.311438 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2025-07-06 20:17:14.311443 | orchestrator | Sunday 06 July 2025 20:11:46 +0000 (0:00:01.532) 0:05:40.318 *********** 2025-07-06 20:17:14.311449 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:17:14.311454 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:17:14.311459 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:17:14.311472 | orchestrator | 2025-07-06 20:17:14.311477 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2025-07-06 20:17:14.311483 | orchestrator | Sunday 06 July 2025 20:11:47 +0000 (0:00:00.721) 0:05:41.040 *********** 2025-07-06 20:17:14.311488 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.311493 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.311499 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.311504 | orchestrator | 2025-07-06 20:17:14.311509 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2025-07-06 20:17:14.311515 | orchestrator | Sunday 06 July 2025 20:11:47 +0000 (0:00:00.322) 0:05:41.363 *********** 2025-07-06 20:17:14.311520 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.311525 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.311531 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.311536 | orchestrator | 2025-07-06 20:17:14.311541 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2025-07-06 20:17:14.311547 | orchestrator | Sunday 06 July 2025 20:11:48 +0000 (0:00:00.321) 0:05:41.684 
*********** 2025-07-06 20:17:14.311552 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:17:14.311558 | orchestrator | 2025-07-06 20:17:14.311563 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2025-07-06 20:17:14.311569 | orchestrator | Sunday 06 July 2025 20:11:49 +0000 (0:00:00.764) 0:05:42.448 *********** 2025-07-06 20:17:14.311574 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.311579 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.311585 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.311590 | orchestrator | 2025-07-06 20:17:14.311595 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2025-07-06 20:17:14.311601 | orchestrator | Sunday 06 July 2025 20:11:49 +0000 (0:00:00.310) 0:05:42.759 *********** 2025-07-06 20:17:14.311606 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.311612 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.311617 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.311622 | orchestrator | 2025-07-06 20:17:14.311628 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2025-07-06 20:17:14.311633 | orchestrator | Sunday 06 July 2025 20:11:49 +0000 (0:00:00.307) 0:05:43.067 *********** 2025-07-06 20:17:14.311639 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:17:14.311644 | orchestrator | 2025-07-06 20:17:14.311649 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2025-07-06 20:17:14.311655 | orchestrator | Sunday 06 July 2025 20:11:50 +0000 (0:00:00.854) 0:05:43.921 *********** 2025-07-06 20:17:14.311660 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:17:14.311665 | orchestrator | 
changed: [testbed-node-1] 2025-07-06 20:17:14.311671 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:17:14.311676 | orchestrator | 2025-07-06 20:17:14.311682 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2025-07-06 20:17:14.311687 | orchestrator | Sunday 06 July 2025 20:11:51 +0000 (0:00:01.176) 0:05:45.098 *********** 2025-07-06 20:17:14.311692 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:17:14.311698 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:17:14.311703 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:17:14.311708 | orchestrator | 2025-07-06 20:17:14.311714 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2025-07-06 20:17:14.311722 | orchestrator | Sunday 06 July 2025 20:11:52 +0000 (0:00:01.210) 0:05:46.308 *********** 2025-07-06 20:17:14.311728 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:17:14.311733 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:17:14.311738 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:17:14.311744 | orchestrator | 2025-07-06 20:17:14.311749 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2025-07-06 20:17:14.311759 | orchestrator | Sunday 06 July 2025 20:11:55 +0000 (0:00:02.283) 0:05:48.592 *********** 2025-07-06 20:17:14.311765 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:17:14.311770 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:17:14.311775 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:17:14.311781 | orchestrator | 2025-07-06 20:17:14.311786 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2025-07-06 20:17:14.311791 | orchestrator | Sunday 06 July 2025 20:11:57 +0000 (0:00:02.033) 0:05:50.625 *********** 2025-07-06 20:17:14.311796 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.311801 | orchestrator | skipping: 
[testbed-node-1] 2025-07-06 20:17:14.311806 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-07-06 20:17:14.311810 | orchestrator | 2025-07-06 20:17:14.311815 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2025-07-06 20:17:14.311820 | orchestrator | Sunday 06 July 2025 20:11:57 +0000 (0:00:00.407) 0:05:51.033 *********** 2025-07-06 20:17:14.311825 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2025-07-06 20:17:14.311842 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2025-07-06 20:17:14.311847 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2025-07-06 20:17:14.311852 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2025-07-06 20:17:14.311857 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 
2025-07-06 20:17:14.311862 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-07-06 20:17:14.311866 | orchestrator | 2025-07-06 20:17:14.311871 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2025-07-06 20:17:14.311876 | orchestrator | Sunday 06 July 2025 20:12:27 +0000 (0:00:30.255) 0:06:21.289 *********** 2025-07-06 20:17:14.311881 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-07-06 20:17:14.311886 | orchestrator | 2025-07-06 20:17:14.311890 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-07-06 20:17:14.311895 | orchestrator | Sunday 06 July 2025 20:12:29 +0000 (0:00:01.561) 0:06:22.850 *********** 2025-07-06 20:17:14.311900 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:17:14.311905 | orchestrator | 2025-07-06 20:17:14.311910 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2025-07-06 20:17:14.311914 | orchestrator | Sunday 06 July 2025 20:12:30 +0000 (0:00:00.848) 0:06:23.698 *********** 2025-07-06 20:17:14.311919 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:17:14.311924 | orchestrator | 2025-07-06 20:17:14.311929 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2025-07-06 20:17:14.311933 | orchestrator | Sunday 06 July 2025 20:12:30 +0000 (0:00:00.163) 0:06:23.862 *********** 2025-07-06 20:17:14.311938 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-07-06 20:17:14.311943 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-07-06 20:17:14.311948 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-07-06 20:17:14.311953 | orchestrator | 2025-07-06 20:17:14.311958 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2025-07-06 20:17:14.311962 | orchestrator | Sunday 06 July 2025 20:12:36 +0000 (0:00:06.305) 0:06:30.168 *********** 2025-07-06 20:17:14.311967 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-07-06 20:17:14.311972 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-07-06 20:17:14.311977 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-07-06 20:17:14.311982 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-07-06 20:17:14.311987 | orchestrator | 2025-07-06 20:17:14.311996 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-07-06 20:17:14.312001 | orchestrator | Sunday 06 July 2025 20:12:41 +0000 (0:00:04.642) 0:06:34.810 *********** 2025-07-06 20:17:14.312006 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:17:14.312010 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:17:14.312015 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:17:14.312020 | orchestrator | 2025-07-06 20:17:14.312025 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-07-06 20:17:14.312029 | orchestrator | Sunday 06 July 2025 20:12:42 +0000 (0:00:00.951) 0:06:35.762 *********** 2025-07-06 20:17:14.312034 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:17:14.312039 | orchestrator | 2025-07-06 20:17:14.312044 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-07-06 20:17:14.312049 | orchestrator | Sunday 06 July 2025 20:12:42 +0000 (0:00:00.548) 0:06:36.310 *********** 2025-07-06 20:17:14.312054 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:17:14.312063 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:17:14.312071 | orchestrator | ok: 
[testbed-node-2] 2025-07-06 20:17:14.312078 | orchestrator | 2025-07-06 20:17:14.312085 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-07-06 20:17:14.312093 | orchestrator | Sunday 06 July 2025 20:12:43 +0000 (0:00:00.318) 0:06:36.628 *********** 2025-07-06 20:17:14.312104 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:17:14.312112 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:17:14.312119 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:17:14.312127 | orchestrator | 2025-07-06 20:17:14.312134 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-07-06 20:17:14.312142 | orchestrator | Sunday 06 July 2025 20:12:44 +0000 (0:00:01.715) 0:06:38.344 *********** 2025-07-06 20:17:14.312149 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-07-06 20:17:14.312157 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-07-06 20:17:14.312165 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-07-06 20:17:14.312187 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.312195 | orchestrator | 2025-07-06 20:17:14.312202 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-07-06 20:17:14.312210 | orchestrator | Sunday 06 July 2025 20:12:45 +0000 (0:00:00.632) 0:06:38.977 *********** 2025-07-06 20:17:14.312218 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:17:14.312225 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:17:14.312233 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:17:14.312240 | orchestrator | 2025-07-06 20:17:14.312249 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-07-06 20:17:14.312256 | orchestrator | 2025-07-06 20:17:14.312263 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-07-06 
20:17:14.312271 | orchestrator | Sunday 06 July 2025 20:12:46 +0000 (0:00:00.541) 0:06:39.518 *********** 2025-07-06 20:17:14.312278 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:17:14.312287 | orchestrator | 2025-07-06 20:17:14.312317 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-07-06 20:17:14.312326 | orchestrator | Sunday 06 July 2025 20:12:46 +0000 (0:00:00.684) 0:06:40.203 *********** 2025-07-06 20:17:14.312331 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:17:14.312336 | orchestrator | 2025-07-06 20:17:14.312341 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-07-06 20:17:14.312346 | orchestrator | Sunday 06 July 2025 20:12:47 +0000 (0:00:00.511) 0:06:40.714 *********** 2025-07-06 20:17:14.312351 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.312356 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.312366 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.312371 | orchestrator | 2025-07-06 20:17:14.312376 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-07-06 20:17:14.312380 | orchestrator | Sunday 06 July 2025 20:12:47 +0000 (0:00:00.295) 0:06:41.009 *********** 2025-07-06 20:17:14.312385 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:17:14.312390 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:17:14.312395 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:17:14.312399 | orchestrator | 2025-07-06 20:17:14.312404 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-07-06 20:17:14.312409 | orchestrator | Sunday 06 July 2025 20:12:48 +0000 (0:00:00.978) 0:06:41.988 *********** 
2025-07-06 20:17:14.312414 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:17:14.312418 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:17:14.312423 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:17:14.312428 | orchestrator | 2025-07-06 20:17:14.312432 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-07-06 20:17:14.312437 | orchestrator | Sunday 06 July 2025 20:12:49 +0000 (0:00:00.810) 0:06:42.799 *********** 2025-07-06 20:17:14.312442 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:17:14.312447 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:17:14.312451 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:17:14.312456 | orchestrator | 2025-07-06 20:17:14.312461 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-07-06 20:17:14.312465 | orchestrator | Sunday 06 July 2025 20:12:50 +0000 (0:00:00.724) 0:06:43.524 *********** 2025-07-06 20:17:14.312470 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.312475 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.312480 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.312484 | orchestrator | 2025-07-06 20:17:14.312489 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-07-06 20:17:14.312494 | orchestrator | Sunday 06 July 2025 20:12:50 +0000 (0:00:00.311) 0:06:43.835 *********** 2025-07-06 20:17:14.312499 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.312503 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.312508 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.312513 | orchestrator | 2025-07-06 20:17:14.312517 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-07-06 20:17:14.312522 | orchestrator | Sunday 06 July 2025 20:12:51 +0000 (0:00:00.568) 0:06:44.404 *********** 2025-07-06 20:17:14.312527 | 
orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.312532 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.312536 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.312541 | orchestrator | 2025-07-06 20:17:14.312546 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-07-06 20:17:14.312551 | orchestrator | Sunday 06 July 2025 20:12:51 +0000 (0:00:00.295) 0:06:44.700 *********** 2025-07-06 20:17:14.312555 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:17:14.312560 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:17:14.312565 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:17:14.312570 | orchestrator | 2025-07-06 20:17:14.312574 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-07-06 20:17:14.312579 | orchestrator | Sunday 06 July 2025 20:12:52 +0000 (0:00:00.707) 0:06:45.408 *********** 2025-07-06 20:17:14.312584 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:17:14.312588 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:17:14.312593 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:17:14.312598 | orchestrator | 2025-07-06 20:17:14.312603 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-07-06 20:17:14.312607 | orchestrator | Sunday 06 July 2025 20:12:52 +0000 (0:00:00.655) 0:06:46.063 *********** 2025-07-06 20:17:14.312612 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.312617 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.312622 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.312626 | orchestrator | 2025-07-06 20:17:14.312635 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-07-06 20:17:14.312644 | orchestrator | Sunday 06 July 2025 20:12:53 +0000 (0:00:00.553) 0:06:46.617 *********** 2025-07-06 20:17:14.312649 | orchestrator | skipping: 
[testbed-node-3] 2025-07-06 20:17:14.312654 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.312659 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.312663 | orchestrator | 2025-07-06 20:17:14.312668 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-07-06 20:17:14.312673 | orchestrator | Sunday 06 July 2025 20:12:53 +0000 (0:00:00.303) 0:06:46.920 *********** 2025-07-06 20:17:14.312678 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:17:14.312683 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:17:14.312687 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:17:14.312692 | orchestrator | 2025-07-06 20:17:14.312697 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-07-06 20:17:14.312702 | orchestrator | Sunday 06 July 2025 20:12:53 +0000 (0:00:00.305) 0:06:47.226 *********** 2025-07-06 20:17:14.312706 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:17:14.312711 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:17:14.312716 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:17:14.312720 | orchestrator | 2025-07-06 20:17:14.312725 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-07-06 20:17:14.312730 | orchestrator | Sunday 06 July 2025 20:12:54 +0000 (0:00:00.337) 0:06:47.564 *********** 2025-07-06 20:17:14.312734 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:17:14.312739 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:17:14.312744 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:17:14.312748 | orchestrator | 2025-07-06 20:17:14.312756 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-07-06 20:17:14.312761 | orchestrator | Sunday 06 July 2025 20:12:54 +0000 (0:00:00.564) 0:06:48.129 *********** 2025-07-06 20:17:14.312766 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.312771 | 
orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.312775 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.312780 | orchestrator | 2025-07-06 20:17:14.312785 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-07-06 20:17:14.312790 | orchestrator | Sunday 06 July 2025 20:12:55 +0000 (0:00:00.316) 0:06:48.445 *********** 2025-07-06 20:17:14.312795 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.312799 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.312804 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.312809 | orchestrator | 2025-07-06 20:17:14.312814 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-07-06 20:17:14.312818 | orchestrator | Sunday 06 July 2025 20:12:55 +0000 (0:00:00.321) 0:06:48.767 *********** 2025-07-06 20:17:14.312823 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.312828 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.312833 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.312837 | orchestrator | 2025-07-06 20:17:14.312842 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-07-06 20:17:14.312847 | orchestrator | Sunday 06 July 2025 20:12:55 +0000 (0:00:00.309) 0:06:49.077 *********** 2025-07-06 20:17:14.312852 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:17:14.312856 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:17:14.312861 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:17:14.312866 | orchestrator | 2025-07-06 20:17:14.312871 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-07-06 20:17:14.312875 | orchestrator | Sunday 06 July 2025 20:12:56 +0000 (0:00:00.613) 0:06:49.690 *********** 2025-07-06 20:17:14.312880 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:17:14.312885 | orchestrator | ok: 
[testbed-node-4] 2025-07-06 20:17:14.312890 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:17:14.312894 | orchestrator | 2025-07-06 20:17:14.312899 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2025-07-06 20:17:14.312904 | orchestrator | Sunday 06 July 2025 20:12:56 +0000 (0:00:00.580) 0:06:50.270 *********** 2025-07-06 20:17:14.312913 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:17:14.312918 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:17:14.312922 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:17:14.312927 | orchestrator | 2025-07-06 20:17:14.312932 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2025-07-06 20:17:14.312937 | orchestrator | Sunday 06 July 2025 20:12:57 +0000 (0:00:00.311) 0:06:50.582 *********** 2025-07-06 20:17:14.312942 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-07-06 20:17:14.312947 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-07-06 20:17:14.312951 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-07-06 20:17:14.312956 | orchestrator | 2025-07-06 20:17:14.312961 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2025-07-06 20:17:14.312966 | orchestrator | Sunday 06 July 2025 20:12:58 +0000 (0:00:00.874) 0:06:51.457 *********** 2025-07-06 20:17:14.312971 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:17:14.312975 | orchestrator | 2025-07-06 20:17:14.312980 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2025-07-06 20:17:14.312985 | orchestrator | Sunday 06 July 2025 20:12:58 +0000 (0:00:00.759) 0:06:52.217 *********** 2025-07-06 20:17:14.312990 | orchestrator | skipping: 
[testbed-node-3] 2025-07-06 20:17:14.312995 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.312999 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.313004 | orchestrator | 2025-07-06 20:17:14.313009 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2025-07-06 20:17:14.313014 | orchestrator | Sunday 06 July 2025 20:12:59 +0000 (0:00:00.316) 0:06:52.533 *********** 2025-07-06 20:17:14.313018 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.313023 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.313028 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.313033 | orchestrator | 2025-07-06 20:17:14.313037 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2025-07-06 20:17:14.313042 | orchestrator | Sunday 06 July 2025 20:12:59 +0000 (0:00:00.295) 0:06:52.828 *********** 2025-07-06 20:17:14.313050 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:17:14.313055 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:17:14.313059 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:17:14.313064 | orchestrator | 2025-07-06 20:17:14.313069 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2025-07-06 20:17:14.313074 | orchestrator | Sunday 06 July 2025 20:13:00 +0000 (0:00:00.907) 0:06:53.736 *********** 2025-07-06 20:17:14.313078 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:17:14.313083 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:17:14.313088 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:17:14.313093 | orchestrator | 2025-07-06 20:17:14.313097 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2025-07-06 20:17:14.313102 | orchestrator | Sunday 06 July 2025 20:13:00 +0000 (0:00:00.354) 0:06:54.090 *********** 2025-07-06 20:17:14.313107 | orchestrator | changed: [testbed-node-4] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-07-06 20:17:14.313112 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-07-06 20:17:14.313117 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-07-06 20:17:14.313121 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-07-06 20:17:14.313126 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-07-06 20:17:14.313131 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-07-06 20:17:14.313140 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-07-06 20:17:14.313150 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-07-06 20:17:14.313155 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-07-06 20:17:14.313160 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-07-06 20:17:14.313164 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-07-06 20:17:14.313185 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-07-06 20:17:14.313190 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-07-06 20:17:14.313194 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-07-06 20:17:14.313199 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-07-06 20:17:14.313204 | orchestrator | 2025-07-06 20:17:14.313209 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
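(Editor's note) The "Apply operating system tuning" task above loops over per-parameter dicts and applies them via sysctl on each OSD node. A minimal sketch of how those items map to `sysctl.d`-style lines — the parameter values are copied from the log output above; the list name and the `render_sysctl` helper are ours, not part of ceph-ansible:

```python
# Values taken verbatim from the task output above; the dict shape
# ({'name': ..., 'value': ..., optional 'enable': ...}) mirrors the loop items.
os_tuning_params = [
    {"name": "fs.aio-max-nr", "value": "1048576", "enable": True},
    {"name": "fs.file-max", "value": 26234859},
    {"name": "vm.zone_reclaim_mode", "value": 0},
    {"name": "vm.swappiness", "value": 10},
    {"name": "vm.min_free_kbytes", "value": "67584"},
]

def render_sysctl(params):
    # Entries with 'enable': False would be skipped; absent means enabled.
    return "\n".join(
        f"{p['name']} = {p['value']}" for p in params if p.get("enable", True)
    )

print(render_sysctl(os_tuning_params))
```

Rendering the loop items this way makes it easy to diff the intended tuning against `/etc/sysctl.d/` on a node after the play has run.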
2025-07-06 20:17:14.313214 | orchestrator | Sunday 06 July 2025 20:13:03 +0000 (0:00:03.065) 0:06:57.156 *********** 2025-07-06 20:17:14.313219 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.313224 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.313228 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.313233 | orchestrator | 2025-07-06 20:17:14.313238 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2025-07-06 20:17:14.313243 | orchestrator | Sunday 06 July 2025 20:13:04 +0000 (0:00:00.307) 0:06:57.463 *********** 2025-07-06 20:17:14.313248 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:17:14.313252 | orchestrator | 2025-07-06 20:17:14.313257 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2025-07-06 20:17:14.313262 | orchestrator | Sunday 06 July 2025 20:13:04 +0000 (0:00:00.737) 0:06:58.201 *********** 2025-07-06 20:17:14.313267 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-07-06 20:17:14.313271 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-07-06 20:17:14.313276 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-07-06 20:17:14.313281 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-07-06 20:17:14.313286 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-07-06 20:17:14.313291 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-07-06 20:17:14.313295 | orchestrator | 2025-07-06 20:17:14.313300 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2025-07-06 20:17:14.313305 | orchestrator | Sunday 06 July 2025 20:13:05 +0000 (0:00:01.040) 0:06:59.242 *********** 2025-07-06 20:17:14.313310 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:17:14.313315 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-07-06 20:17:14.313319 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-07-06 20:17:14.313324 | orchestrator | 2025-07-06 20:17:14.313329 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2025-07-06 20:17:14.313334 | orchestrator | Sunday 06 July 2025 20:13:08 +0000 (0:00:02.169) 0:07:01.411 *********** 2025-07-06 20:17:14.313339 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-07-06 20:17:14.313343 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-07-06 20:17:14.313348 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:17:14.313353 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-07-06 20:17:14.313358 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-07-06 20:17:14.313362 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:17:14.313367 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-07-06 20:17:14.313372 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-07-06 20:17:14.313380 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:17:14.313385 | orchestrator | 2025-07-06 20:17:14.313390 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2025-07-06 20:17:14.313398 | orchestrator | Sunday 06 July 2025 20:13:09 +0000 (0:00:01.483) 0:07:02.895 *********** 2025-07-06 20:17:14.313403 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-07-06 20:17:14.313407 | orchestrator | 2025-07-06 20:17:14.313412 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2025-07-06 20:17:14.313417 | orchestrator | Sunday 06 July 2025 20:13:11 +0000 (0:00:02.201) 0:07:05.096 *********** 2025-07-06 20:17:14.313422 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:17:14.313427 | orchestrator | 2025-07-06 20:17:14.313432 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2025-07-06 20:17:14.313436 | orchestrator | Sunday 06 July 2025 20:13:12 +0000 (0:00:00.558) 0:07:05.654 *********** 2025-07-06 20:17:14.313441 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09', 'data_vg': 'ceph-22d6bcb2-409c-5bf5-80b4-f4dcfc8f2a09'}) 2025-07-06 20:17:14.313447 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-31ad454b-c5b7-54ad-acab-5839a456146b', 'data_vg': 'ceph-31ad454b-c5b7-54ad-acab-5839a456146b'}) 2025-07-06 20:17:14.313451 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-fc1251bd-e592-50b3-b197-385f411a7339', 'data_vg': 'ceph-fc1251bd-e592-50b3-b197-385f411a7339'}) 2025-07-06 20:17:14.313459 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-1256d0fb-e60f-50ff-afd8-4edc5f2c0a15', 'data_vg': 'ceph-1256d0fb-e60f-50ff-afd8-4edc5f2c0a15'}) 2025-07-06 20:17:14.313464 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-2eb0e424-9f58-550c-b8cf-76c1b52e517a', 'data_vg': 'ceph-2eb0e424-9f58-550c-b8cf-76c1b52e517a'}) 2025-07-06 20:17:14.313469 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-b5f0fce0-432f-57fb-bebd-426658f60987', 'data_vg': 'ceph-b5f0fce0-432f-57fb-bebd-426658f60987'}) 2025-07-06 20:17:14.313474 | orchestrator | 2025-07-06 20:17:14.313479 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2025-07-06 20:17:14.313484 | orchestrator | Sunday 06 July 2025 20:13:55 +0000 (0:00:43.273) 0:07:48.928 *********** 2025-07-06 20:17:14.313488 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.313493 | orchestrator | skipping: [testbed-node-4] 2025-07-06 
20:17:14.313498 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.313503 | orchestrator | 2025-07-06 20:17:14.313508 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2025-07-06 20:17:14.313512 | orchestrator | Sunday 06 July 2025 20:13:56 +0000 (0:00:00.574) 0:07:49.502 *********** 2025-07-06 20:17:14.313517 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:17:14.313522 | orchestrator | 2025-07-06 20:17:14.313527 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2025-07-06 20:17:14.313532 | orchestrator | Sunday 06 July 2025 20:13:56 +0000 (0:00:00.522) 0:07:50.025 *********** 2025-07-06 20:17:14.313536 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:17:14.313541 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:17:14.313546 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:17:14.313551 | orchestrator | 2025-07-06 20:17:14.313555 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2025-07-06 20:17:14.313560 | orchestrator | Sunday 06 July 2025 20:13:57 +0000 (0:00:00.644) 0:07:50.669 *********** 2025-07-06 20:17:14.313565 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:17:14.313570 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:17:14.313574 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:17:14.313579 | orchestrator | 2025-07-06 20:17:14.313584 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2025-07-06 20:17:14.313589 | orchestrator | Sunday 06 July 2025 20:14:00 +0000 (0:00:02.810) 0:07:53.479 *********** 2025-07-06 20:17:14.313597 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:17:14.313601 | orchestrator | 2025-07-06 20:17:14.313606 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2025-07-06 20:17:14.313611 | orchestrator | Sunday 06 July 2025 20:14:00 +0000 (0:00:00.538) 0:07:54.018 *********** 2025-07-06 20:17:14.313616 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:17:14.313621 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:17:14.313625 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:17:14.313630 | orchestrator | 2025-07-06 20:17:14.313635 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2025-07-06 20:17:14.313640 | orchestrator | Sunday 06 July 2025 20:14:01 +0000 (0:00:01.149) 0:07:55.168 *********** 2025-07-06 20:17:14.313645 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:17:14.313649 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:17:14.313654 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:17:14.313659 | orchestrator | 2025-07-06 20:17:14.313664 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2025-07-06 20:17:14.313669 | orchestrator | Sunday 06 July 2025 20:14:03 +0000 (0:00:01.348) 0:07:56.517 *********** 2025-07-06 20:17:14.313673 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:17:14.313678 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:17:14.313683 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:17:14.313687 | orchestrator | 2025-07-06 20:17:14.313692 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2025-07-06 20:17:14.313697 | orchestrator | Sunday 06 July 2025 20:14:04 +0000 (0:00:01.625) 0:07:58.142 *********** 2025-07-06 20:17:14.313702 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.313707 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.313711 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.313716 | orchestrator | 2025-07-06 20:17:14.313721 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2025-07-06 20:17:14.313729 | orchestrator | Sunday 06 July 2025 20:14:05 +0000 (0:00:00.330) 0:07:58.472 *********** 2025-07-06 20:17:14.313734 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.313738 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.313743 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.313748 | orchestrator | 2025-07-06 20:17:14.313753 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2025-07-06 20:17:14.313758 | orchestrator | Sunday 06 July 2025 20:14:05 +0000 (0:00:00.329) 0:07:58.802 *********** 2025-07-06 20:17:14.313762 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-07-06 20:17:14.313767 | orchestrator | ok: [testbed-node-4] => (item=4) 2025-07-06 20:17:14.313772 | orchestrator | ok: [testbed-node-5] => (item=1) 2025-07-06 20:17:14.313777 | orchestrator | ok: [testbed-node-3] => (item=3) 2025-07-06 20:17:14.313781 | orchestrator | ok: [testbed-node-4] => (item=2) 2025-07-06 20:17:14.313786 | orchestrator | ok: [testbed-node-5] => (item=5) 2025-07-06 20:17:14.313791 | orchestrator | 2025-07-06 20:17:14.313796 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2025-07-06 20:17:14.313801 | orchestrator | Sunday 06 July 2025 20:14:06 +0000 (0:00:01.312) 0:08:00.114 *********** 2025-07-06 20:17:14.313806 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-07-06 20:17:14.313810 | orchestrator | changed: [testbed-node-4] => (item=4) 2025-07-06 20:17:14.313815 | orchestrator | changed: [testbed-node-5] => (item=1) 2025-07-06 20:17:14.313820 | orchestrator | changed: [testbed-node-4] => (item=2) 2025-07-06 20:17:14.313824 | orchestrator | changed: [testbed-node-3] => (item=3) 2025-07-06 20:17:14.313829 | orchestrator | changed: [testbed-node-5] => (item=5) 2025-07-06 20:17:14.313834 | orchestrator | 2025-07-06 20:17:14.313841 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2025-07-06 20:17:14.313846 | orchestrator | Sunday 06 July 2025 20:14:08 +0000 (0:00:02.133) 0:08:02.247 *********** 2025-07-06 20:17:14.313858 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-07-06 20:17:14.313862 | orchestrator | changed: [testbed-node-5] => (item=1) 2025-07-06 20:17:14.313867 | orchestrator | changed: [testbed-node-4] => (item=4) 2025-07-06 20:17:14.313872 | orchestrator | changed: [testbed-node-3] => (item=3) 2025-07-06 20:17:14.313877 | orchestrator | changed: [testbed-node-5] => (item=5) 2025-07-06 20:17:14.313881 | orchestrator | changed: [testbed-node-4] => (item=2) 2025-07-06 20:17:14.313886 | orchestrator | 2025-07-06 20:17:14.313891 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2025-07-06 20:17:14.313896 | orchestrator | Sunday 06 July 2025 20:14:12 +0000 (0:00:03.452) 0:08:05.700 *********** 2025-07-06 20:17:14.313900 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.313905 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.313910 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-07-06 20:17:14.313915 | orchestrator | 2025-07-06 20:17:14.313919 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2025-07-06 20:17:14.313924 | orchestrator | Sunday 06 July 2025 20:14:15 +0000 (0:00:02.946) 0:08:08.646 *********** 2025-07-06 20:17:14.313929 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.313934 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.313938 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
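(Editor's note) The "Wait for all osd to be up" task above retried once before succeeding: after `noup` is unset, ceph-ansible polls the cluster's OSD stats (with up to 60 retries here) until the number of up OSDs matches the total. A minimal sketch of that readiness check — the JSON field names are assumed to match ceph's `osd stat` output, and the sample payload is fabricated for illustration:

```python
import json

def all_osds_up(osd_stat_json: str) -> bool:
    # True once every registered OSD reports as up (and at least one exists).
    stat = json.loads(osd_stat_json)
    return stat["num_osds"] > 0 and stat["num_osds"] == stat["num_up_osds"]

# Hypothetical payload for the 6-OSD testbed above (2 OSDs per node, 3 nodes).
sample = json.dumps({"num_osds": 6, "num_up_osds": 6, "num_in_osds": 6})
print(all_osds_up(sample))  # prints True
```

The one failed attempt in the log simply means the first poll ran before all six OSD daemons had registered as up.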
2025-07-06 20:17:14.313943 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-07-06 20:17:14.313948 | orchestrator | 2025-07-06 20:17:14.313953 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2025-07-06 20:17:14.313957 | orchestrator | Sunday 06 July 2025 20:14:28 +0000 (0:00:13.258) 0:08:21.905 *********** 2025-07-06 20:17:14.313962 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.313967 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.313972 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.313976 | orchestrator | 2025-07-06 20:17:14.313981 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-07-06 20:17:14.313986 | orchestrator | Sunday 06 July 2025 20:14:29 +0000 (0:00:00.853) 0:08:22.758 *********** 2025-07-06 20:17:14.313991 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.313995 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.314000 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.314005 | orchestrator | 2025-07-06 20:17:14.314010 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-07-06 20:17:14.314040 | orchestrator | Sunday 06 July 2025 20:14:29 +0000 (0:00:00.573) 0:08:23.332 *********** 2025-07-06 20:17:14.314046 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:17:14.314051 | orchestrator | 2025-07-06 20:17:14.314056 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-07-06 20:17:14.314061 | orchestrator | Sunday 06 July 2025 20:14:30 +0000 (0:00:00.541) 0:08:23.873 *********** 2025-07-06 20:17:14.314066 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-06 20:17:14.314070 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2025-07-06 20:17:14.314075 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-06 20:17:14.314080 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.314085 | orchestrator | 2025-07-06 20:17:14.314090 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-07-06 20:17:14.314095 | orchestrator | Sunday 06 July 2025 20:14:30 +0000 (0:00:00.395) 0:08:24.269 *********** 2025-07-06 20:17:14.314099 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.314104 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.314109 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.314114 | orchestrator | 2025-07-06 20:17:14.314118 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-07-06 20:17:14.314127 | orchestrator | Sunday 06 July 2025 20:14:31 +0000 (0:00:00.288) 0:08:24.558 *********** 2025-07-06 20:17:14.314132 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.314137 | orchestrator | 2025-07-06 20:17:14.314142 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-07-06 20:17:14.314150 | orchestrator | Sunday 06 July 2025 20:14:31 +0000 (0:00:00.212) 0:08:24.770 *********** 2025-07-06 20:17:14.314155 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.314159 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.314164 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.314222 | orchestrator | 2025-07-06 20:17:14.314228 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-07-06 20:17:14.314233 | orchestrator | Sunday 06 July 2025 20:14:31 +0000 (0:00:00.549) 0:08:25.320 *********** 2025-07-06 20:17:14.314238 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.314242 | orchestrator | 2025-07-06 20:17:14.314247 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2025-07-06 20:17:14.314252 | orchestrator | Sunday 06 July 2025 20:14:32 +0000 (0:00:00.213) 0:08:25.533 *********** 2025-07-06 20:17:14.314257 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.314261 | orchestrator | 2025-07-06 20:17:14.314266 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-07-06 20:17:14.314271 | orchestrator | Sunday 06 July 2025 20:14:32 +0000 (0:00:00.228) 0:08:25.762 *********** 2025-07-06 20:17:14.314276 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.314281 | orchestrator | 2025-07-06 20:17:14.314285 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-07-06 20:17:14.314290 | orchestrator | Sunday 06 July 2025 20:14:32 +0000 (0:00:00.125) 0:08:25.887 *********** 2025-07-06 20:17:14.314295 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.314300 | orchestrator | 2025-07-06 20:17:14.314304 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-07-06 20:17:14.314309 | orchestrator | Sunday 06 July 2025 20:14:32 +0000 (0:00:00.232) 0:08:26.120 *********** 2025-07-06 20:17:14.314318 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.314323 | orchestrator | 2025-07-06 20:17:14.314328 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-07-06 20:17:14.314332 | orchestrator | Sunday 06 July 2025 20:14:32 +0000 (0:00:00.242) 0:08:26.362 *********** 2025-07-06 20:17:14.314337 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-06 20:17:14.314342 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-06 20:17:14.314347 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-06 20:17:14.314351 | orchestrator | skipping: [testbed-node-3] 2025-07-06 
20:17:14.314356 | orchestrator | 2025-07-06 20:17:14.314361 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-07-06 20:17:14.314366 | orchestrator | Sunday 06 July 2025 20:14:33 +0000 (0:00:00.384) 0:08:26.747 *********** 2025-07-06 20:17:14.314371 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.314375 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.314380 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.314385 | orchestrator | 2025-07-06 20:17:14.314390 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-07-06 20:17:14.314395 | orchestrator | Sunday 06 July 2025 20:14:33 +0000 (0:00:00.323) 0:08:27.071 *********** 2025-07-06 20:17:14.314399 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.314404 | orchestrator | 2025-07-06 20:17:14.314409 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-07-06 20:17:14.314414 | orchestrator | Sunday 06 July 2025 20:14:34 +0000 (0:00:00.867) 0:08:27.938 *********** 2025-07-06 20:17:14.314418 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.314423 | orchestrator | 2025-07-06 20:17:14.314428 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-07-06 20:17:14.314433 | orchestrator | 2025-07-06 20:17:14.314437 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-07-06 20:17:14.314447 | orchestrator | Sunday 06 July 2025 20:14:35 +0000 (0:00:00.696) 0:08:28.634 *********** 2025-07-06 20:17:14.314452 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:17:14.314458 | orchestrator | 2025-07-06 20:17:14.314462 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2025-07-06 20:17:14.314467 | orchestrator | Sunday 06 July 2025 20:14:36 +0000 (0:00:01.432) 0:08:30.067 *********** 2025-07-06 20:17:14.314472 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:17:14.314477 | orchestrator | 2025-07-06 20:17:14.314482 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-07-06 20:17:14.314486 | orchestrator | Sunday 06 July 2025 20:14:38 +0000 (0:00:01.310) 0:08:31.377 *********** 2025-07-06 20:17:14.314491 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.314496 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.314501 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.314505 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:17:14.314510 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:17:14.314515 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:17:14.314520 | orchestrator | 2025-07-06 20:17:14.314524 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-07-06 20:17:14.314529 | orchestrator | Sunday 06 July 2025 20:14:39 +0000 (0:00:01.379) 0:08:32.757 *********** 2025-07-06 20:17:14.314534 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.314539 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.314544 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:17:14.314548 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.314553 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:17:14.314558 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:17:14.314563 | orchestrator | 2025-07-06 20:17:14.314567 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-07-06 20:17:14.314572 | orchestrator | Sunday 06 
July 2025 20:14:40 +0000 (0:00:00.723) 0:08:33.481 *********** 2025-07-06 20:17:14.314577 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.314582 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.314587 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:17:14.314591 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:17:14.314596 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:17:14.314604 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.314609 | orchestrator | 2025-07-06 20:17:14.314614 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-07-06 20:17:14.314619 | orchestrator | Sunday 06 July 2025 20:14:40 +0000 (0:00:00.850) 0:08:34.331 *********** 2025-07-06 20:17:14.314623 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.314628 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.314633 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:17:14.314638 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.314642 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:17:14.314647 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:17:14.314652 | orchestrator | 2025-07-06 20:17:14.314656 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-07-06 20:17:14.314661 | orchestrator | Sunday 06 July 2025 20:14:41 +0000 (0:00:00.720) 0:08:35.052 *********** 2025-07-06 20:17:14.314666 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.314671 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.314676 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.314680 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:17:14.314685 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:17:14.314690 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:17:14.314695 | orchestrator | 2025-07-06 20:17:14.314700 | orchestrator | TASK [ceph-handler : Check for a rbd mirror 
container] ************************* 2025-07-06 20:17:14.314708 | orchestrator | Sunday 06 July 2025 20:14:42 +0000 (0:00:01.233) 0:08:36.286 *********** 2025-07-06 20:17:14.314713 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.314718 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.314723 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.314728 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.314732 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.314739 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.314744 | orchestrator | 2025-07-06 20:17:14.314749 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-07-06 20:17:14.314754 | orchestrator | Sunday 06 July 2025 20:14:43 +0000 (0:00:00.597) 0:08:36.883 *********** 2025-07-06 20:17:14.314759 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.314763 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.314768 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.314773 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:14.314778 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:14.314783 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:14.314787 | orchestrator | 2025-07-06 20:17:14.314792 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-07-06 20:17:14.314797 | orchestrator | Sunday 06 July 2025 20:14:44 +0000 (0:00:00.768) 0:08:37.651 *********** 2025-07-06 20:17:14.314802 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:17:14.314806 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:17:14.314811 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:17:14.314816 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:17:14.314821 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:17:14.314825 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:17:14.314830 | 
orchestrator |
2025-07-06 20:17:14.314835 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
Sunday 06 July 2025 20:14:45 +0000 (0:00:01.072) 0:08:38.724 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Sunday 06 July 2025 20:14:46 +0000 (0:00:01.467) 0:08:40.191 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Sunday 06 July 2025 20:14:47 +0000 (0:00:00.632) 0:08:40.824 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Sunday 06 July 2025 20:14:48 +0000 (0:00:00.874) 0:08:41.699 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Sunday 06 July 2025 20:14:48 +0000 (0:00:00.585) 0:08:42.284 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Sunday 06 July 2025 20:14:49 +0000 (0:00:00.774) 0:08:43.059 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Sunday 06 July 2025 20:14:50 +0000 (0:00:00.595) 0:08:43.654 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Sunday 06 July 2025 20:14:51 +0000 (0:00:00.795) 0:08:44.450 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Sunday 06 July 2025 20:14:51 +0000 (0:00:00.579) 0:08:45.029 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Sunday 06 July 2025 20:14:52 +0000 (0:00:00.821) 0:08:45.851 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Sunday 06 July 2025 20:14:53 +0000 (0:00:00.624) 0:08:46.476 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-crash : Create client.crash keyring] ********************************
Sunday 06 July 2025 20:14:54 +0000 (0:00:01.241) 0:08:47.717 ***********
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-crash : Get keys from monitors] *************************************
Sunday 06 July 2025 20:14:58 +0000 (0:00:04.111) 0:08:51.828 ***********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
Sunday 06 July 2025 20:15:00 +0000 (0:00:02.009) 0:08:53.837 ***********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
ok: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
Sunday 06 July 2025 20:15:02 +0000 (0:00:01.675) 0:08:55.513 ***********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-crash : Include_tasks systemd.yml] **********************************
Sunday 06 July 2025 20:15:03 +0000 (0:00:00.954) 0:08:56.468 ***********
included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
Sunday 06 July 2025 20:15:04 +0000 (0:00:01.247) 0:08:57.715 ***********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-crash : Start the ceph-crash service] *******************************
Sunday 06 July 2025 20:15:06 +0000 (0:00:01.776) 0:08:59.492 ***********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
Sunday 06 July 2025 20:15:09 +0000 (0:00:03.511) 0:09:03.004 ***********
included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
Sunday 06 July 2025 20:15:11 +0000 (0:00:01.518) 0:09:04.522 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
Sunday 06 July 2025 20:15:12 +0000 (0:00:00.893) 0:09:05.416 ***********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
Sunday 06 July 2025 20:15:14 +0000 (0:00:02.242) 0:09:07.658 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role ceph-mds] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Sunday 06 July 2025 20:15:15 +0000 (0:00:01.177) 0:09:08.836 ***********
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Include check_running_containers.yml] *********************
Sunday 06 July 2025 20:15:15 +0000 (0:00:00.516) 0:09:09.352 ***********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Check for a mon container] ********************************
Sunday 06 July 2025 20:15:16 +0000 (0:00:00.768) 0:09:10.121 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for an osd container] *******************************
Sunday 06 July 2025 20:15:17 +0000 (0:00:00.331) 0:09:10.452 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Sunday 06 July 2025 20:15:17 +0000 (0:00:00.724) 0:09:11.177 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Sunday 06 July 2025 20:15:18 +0000 (0:00:00.980) 0:09:12.158 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mgr container] ********************************
Sunday 06 July 2025 20:15:19 +0000 (0:00:00.667) 0:09:12.825 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Sunday 06 July 2025 20:15:19 +0000 (0:00:00.284) 0:09:13.109 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a nfs container] ********************************
Sunday 06 July 2025 20:15:19 +0000 (0:00:00.252) 0:09:13.361 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Sunday 06 July 2025 20:15:20 +0000 (0:00:00.454) 0:09:13.815 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Sunday 06 July 2025 20:15:21 +0000 (0:00:00.695) 0:09:14.511 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Sunday 06 July 2025 20:15:21 +0000 (0:00:00.831) 0:09:15.342 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Sunday 06 July 2025 20:15:22 +0000 (0:00:00.307) 0:09:15.649 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Sunday 06 July 2025 20:15:22 +0000 (0:00:00.587) 0:09:16.237 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Sunday 06 July 2025 20:15:23 +0000 (0:00:00.314) 0:09:16.552 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Sunday 06 July 2025 20:15:23 +0000 (0:00:00.338) 0:09:16.890 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Sunday 06 July 2025 20:15:23 +0000 (0:00:00.317) 0:09:17.207 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Sunday 06 July 2025 20:15:24 +0000 (0:00:00.555) 0:09:17.763 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Sunday 06 July 2025 20:15:24 +0000 (0:00:00.308) 0:09:18.072 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Sunday 06 July 2025 20:15:25 +0000 (0:00:00.338) 0:09:18.410 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Sunday 06 July 2025 20:15:25 +0000 (0:00:00.357) 0:09:18.768 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
Sunday 06 July 2025 20:15:26 +0000 (0:00:00.833) 0:09:19.601 ***********
skipping: [testbed-node-4]
skipping: [testbed-node-5]
included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3

TASK [ceph-facts : Get current default crush rule details] *********************
Sunday 06 July 2025 20:15:26 +0000 (0:00:00.439) 0:09:20.041 ***********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-facts : Get current default crush rule name] ************************
Sunday 06 July 2025 20:15:28 +0000 (0:00:02.093) 0:09:22.134 ***********
skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
skipping: [testbed-node-3]

TASK [ceph-mds : Create filesystem pools] **************************************
Sunday 06 July 2025 20:15:29 +0000 (0:00:00.254) 0:09:22.388 ***********
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})

TASK [ceph-mds : Create ceph filesystem] ***************************************
Sunday 06 July 2025 20:15:37 +0000 (0:00:08.702) 0:09:31.091 ***********
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-mds : Include common.yml] *******************************************
Sunday 06 July 2025 20:15:41 +0000 (0:00:03.678) 0:09:34.769 ***********
included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
Sunday 06 July 2025 20:15:41 +0000 (0:00:00.551) 0:09:35.321 ***********
ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)

TASK [ceph-mds : Get keys from monitors] ***************************************
Sunday 06 July 2025 20:15:42 +0000 (0:00:01.022) 0:09:36.344 ***********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
skipping: [testbed-node-3] => (item=None)
ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]

TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
Sunday 06 July 2025 20:15:45 +0000 (0:00:02.368) 0:09:38.713 ***********
changed: [testbed-node-3] => (item=None)
skipping: [testbed-node-3] => (item=None)
changed: [testbed-node-3]
changed: [testbed-node-4] => (item=None)
skipping: [testbed-node-4] => (item=None)
changed: [testbed-node-4]
changed: [testbed-node-5] => (item=None)
skipping: [testbed-node-5] => (item=None)
changed: [testbed-node-5]

TASK [ceph-mds : Create mds keyring] *******************************************
Sunday 06 July 2025 20:15:46 +0000 (0:00:01.444) 0:09:40.157 ***********
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]

TASK [ceph-mds : Non_containerized.yml] ****************************************
Sunday 06 July 2025 20:15:49 +0000 (0:00:02.666) 0:09:42.823 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-mds : Containerized.yml] ********************************************
Sunday 06 July 2025 20:15:49 +0000 (0:00:00.300) 0:09:43.123 ***********
included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-mds : Include_tasks systemd.yml] ************************************
Sunday 06 July 2025 20:15:50 +0000 (0:00:00.791) 0:09:43.914 ***********
included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-mds : Generate systemd unit file] ***********************************
Sunday 06 July 2025 20:15:51 +0000 (0:00:00.510) 0:09:44.425 ***********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
Sunday 06 July 2025 20:15:52 +0000 (0:00:01.299) 0:09:45.725 ***********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Enable ceph-mds.target] ***************************************
Sunday 06 July 2025 20:15:53 +0000 (0:00:01.454) 0:09:47.180 ***********
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]

TASK [ceph-mds : Systemd start mds container] **********************************
Sunday 06 July 2025 20:15:55 +0000 (0:00:01.799) 0:09:48.980 ***********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Wait for mds socket to exist] *********************************
Sunday 06 July 2025 20:15:57 +0000 (0:00:02.070) 0:09:51.050 ***********
ok: [testbed-node-4]
ok: [testbed-node-3]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Sunday 06 July 2025 20:15:59 +0000 (0:00:01.325) 0:09:52.376 ***********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
Sunday 06 July 2025 20:15:59 +0000 (0:00:00.674) 0:09:53.050 ***********
included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
Sunday 06 July 2025 20:16:00 +0000 (0:00:00.679) 0:09:53.730 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
Sunday 06 July 2025 20:16:00 +0000 (0:00:00.334) 0:09:54.064 ***********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
Sunday 06 July 2025 20:16:01 +0000 (0:00:01.190) 0:09:55.255 ***********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
Sunday 06 July 2025 20:16:02 +0000 (0:00:00.854) 0:09:56.109 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

PLAY [Apply role ceph-rgw] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Sunday 06 July 2025 20:16:03 +0000 (0:00:00.796) 0:09:56.905 ***********
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Include check_running_containers.yml] *********************
Sunday 06 July 2025 20:16:04 +0000 (0:00:00.489) 0:09:57.395 ***********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Check for a mon container] ********************************
Sunday 06 July 2025 20:16:04 +0000 (0:00:00.715) 0:09:58.110 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for an osd container] *******************************
Sunday 06 July 2025 20:16:05 +0000 (0:00:00.349) 0:09:58.460 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
2025-07-06 
20:17:14.317093 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:17:14.317098 | orchestrator | 2025-07-06 20:17:14.317102 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-07-06 20:17:14.317107 | orchestrator | Sunday 06 July 2025 20:16:05 +0000 (0:00:00.702) 0:09:59.162 *********** 2025-07-06 20:17:14.317111 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:17:14.317115 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:17:14.317120 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:17:14.317124 | orchestrator | 2025-07-06 20:17:14.317129 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-07-06 20:17:14.317133 | orchestrator | Sunday 06 July 2025 20:16:06 +0000 (0:00:00.704) 0:09:59.867 *********** 2025-07-06 20:17:14.317137 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:17:14.317142 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:17:14.317146 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:17:14.317151 | orchestrator | 2025-07-06 20:17:14.317155 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-07-06 20:17:14.317160 | orchestrator | Sunday 06 July 2025 20:16:07 +0000 (0:00:01.088) 0:10:00.955 *********** 2025-07-06 20:17:14.317164 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.317184 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.317188 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.317193 | orchestrator | 2025-07-06 20:17:14.317198 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-07-06 20:17:14.317205 | orchestrator | Sunday 06 July 2025 20:16:07 +0000 (0:00:00.312) 0:10:01.267 *********** 2025-07-06 20:17:14.317210 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.317214 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.317219 | orchestrator | skipping: 
[testbed-node-5] 2025-07-06 20:17:14.317223 | orchestrator | 2025-07-06 20:17:14.317227 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-07-06 20:17:14.317232 | orchestrator | Sunday 06 July 2025 20:16:08 +0000 (0:00:00.296) 0:10:01.563 *********** 2025-07-06 20:17:14.317236 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.317241 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.317245 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.317250 | orchestrator | 2025-07-06 20:17:14.317254 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-07-06 20:17:14.317259 | orchestrator | Sunday 06 July 2025 20:16:08 +0000 (0:00:00.309) 0:10:01.873 *********** 2025-07-06 20:17:14.317264 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:17:14.317268 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:17:14.317272 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:17:14.317277 | orchestrator | 2025-07-06 20:17:14.317281 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-07-06 20:17:14.317289 | orchestrator | Sunday 06 July 2025 20:16:09 +0000 (0:00:00.977) 0:10:02.851 *********** 2025-07-06 20:17:14.317294 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:17:14.317298 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:17:14.317303 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:17:14.317307 | orchestrator | 2025-07-06 20:17:14.317312 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-07-06 20:17:14.317316 | orchestrator | Sunday 06 July 2025 20:16:10 +0000 (0:00:00.738) 0:10:03.589 *********** 2025-07-06 20:17:14.317321 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.317325 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.317329 | orchestrator | skipping: [testbed-node-5] 2025-07-06 
20:17:14.317334 | orchestrator | 2025-07-06 20:17:14.317338 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-07-06 20:17:14.317343 | orchestrator | Sunday 06 July 2025 20:16:10 +0000 (0:00:00.310) 0:10:03.899 *********** 2025-07-06 20:17:14.317347 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.317352 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.317356 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.317361 | orchestrator | 2025-07-06 20:17:14.317365 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-07-06 20:17:14.317370 | orchestrator | Sunday 06 July 2025 20:16:10 +0000 (0:00:00.288) 0:10:04.188 *********** 2025-07-06 20:17:14.317374 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:17:14.317379 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:17:14.317383 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:17:14.317387 | orchestrator | 2025-07-06 20:17:14.317392 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-07-06 20:17:14.317396 | orchestrator | Sunday 06 July 2025 20:16:11 +0000 (0:00:00.579) 0:10:04.768 *********** 2025-07-06 20:17:14.317401 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:17:14.317405 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:17:14.317410 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:17:14.317414 | orchestrator | 2025-07-06 20:17:14.317419 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-07-06 20:17:14.317423 | orchestrator | Sunday 06 July 2025 20:16:11 +0000 (0:00:00.350) 0:10:05.118 *********** 2025-07-06 20:17:14.317428 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:17:14.317432 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:17:14.317437 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:17:14.317441 | orchestrator | 2025-07-06 
20:17:14.317446 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-07-06 20:17:14.317450 | orchestrator | Sunday 06 July 2025 20:16:12 +0000 (0:00:00.324) 0:10:05.443 *********** 2025-07-06 20:17:14.317455 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.317459 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.317464 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.317468 | orchestrator | 2025-07-06 20:17:14.317472 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-07-06 20:17:14.317477 | orchestrator | Sunday 06 July 2025 20:16:12 +0000 (0:00:00.297) 0:10:05.741 *********** 2025-07-06 20:17:14.317481 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.317486 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.317490 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.317495 | orchestrator | 2025-07-06 20:17:14.317499 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-07-06 20:17:14.317504 | orchestrator | Sunday 06 July 2025 20:16:12 +0000 (0:00:00.558) 0:10:06.299 *********** 2025-07-06 20:17:14.317511 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.317516 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.317520 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.317524 | orchestrator | 2025-07-06 20:17:14.317529 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-07-06 20:17:14.317533 | orchestrator | Sunday 06 July 2025 20:16:13 +0000 (0:00:00.314) 0:10:06.613 *********** 2025-07-06 20:17:14.317541 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:17:14.317546 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:17:14.317550 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:17:14.317554 | orchestrator | 2025-07-06 20:17:14.317559 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-07-06 20:17:14.317563 | orchestrator | Sunday 06 July 2025 20:16:13 +0000 (0:00:00.322) 0:10:06.936 *********** 2025-07-06 20:17:14.317568 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:17:14.317572 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:17:14.317577 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:17:14.317581 | orchestrator | 2025-07-06 20:17:14.317586 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-07-06 20:17:14.317590 | orchestrator | Sunday 06 July 2025 20:16:14 +0000 (0:00:00.731) 0:10:07.668 *********** 2025-07-06 20:17:14.317595 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:17:14.317599 | orchestrator | 2025-07-06 20:17:14.317604 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-07-06 20:17:14.317608 | orchestrator | Sunday 06 July 2025 20:16:14 +0000 (0:00:00.519) 0:10:08.187 *********** 2025-07-06 20:17:14.317615 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:17:14.317620 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-07-06 20:17:14.317624 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-07-06 20:17:14.317629 | orchestrator | 2025-07-06 20:17:14.317633 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-07-06 20:17:14.317638 | orchestrator | Sunday 06 July 2025 20:16:16 +0000 (0:00:01.875) 0:10:10.062 *********** 2025-07-06 20:17:14.317642 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-07-06 20:17:14.317647 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-07-06 20:17:14.317651 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-07-06 20:17:14.317656 
| orchestrator | changed: [testbed-node-4] 2025-07-06 20:17:14.317660 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-07-06 20:17:14.317665 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:17:14.317669 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-07-06 20:17:14.317674 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-07-06 20:17:14.317678 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:17:14.317683 | orchestrator | 2025-07-06 20:17:14.317687 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-07-06 20:17:14.317692 | orchestrator | Sunday 06 July 2025 20:16:17 +0000 (0:00:01.246) 0:10:11.309 *********** 2025-07-06 20:17:14.317697 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.317701 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.317705 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.317710 | orchestrator | 2025-07-06 20:17:14.317714 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-07-06 20:17:14.317719 | orchestrator | Sunday 06 July 2025 20:16:18 +0000 (0:00:00.280) 0:10:11.589 *********** 2025-07-06 20:17:14.317724 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:17:14.317728 | orchestrator | 2025-07-06 20:17:14.317733 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-07-06 20:17:14.317737 | orchestrator | Sunday 06 July 2025 20:16:18 +0000 (0:00:00.481) 0:10:12.071 *********** 2025-07-06 20:17:14.317742 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-07-06 20:17:14.317747 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-07-06 20:17:14.317751 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-07-06 20:17:14.317760 | orchestrator | 2025-07-06 20:17:14.317764 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-07-06 20:17:14.317769 | orchestrator | Sunday 06 July 2025 20:16:19 +0000 (0:00:01.094) 0:10:13.165 *********** 2025-07-06 20:17:14.317773 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:17:14.317778 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-07-06 20:17:14.317783 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:17:14.317787 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-07-06 20:17:14.317792 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:17:14.317796 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-07-06 20:17:14.317801 | orchestrator | 2025-07-06 20:17:14.317805 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-07-06 20:17:14.317812 | orchestrator | Sunday 06 July 2025 20:16:24 +0000 (0:00:04.860) 0:10:18.025 *********** 2025-07-06 20:17:14.317817 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:17:14.317821 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-07-06 20:17:14.317826 | orchestrator | 
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:17:14.317830 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-07-06 20:17:14.317834 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:17:14.317839 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-07-06 20:17:14.317843 | orchestrator | 2025-07-06 20:17:14.317848 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-07-06 20:17:14.317852 | orchestrator | Sunday 06 July 2025 20:16:26 +0000 (0:00:02.180) 0:10:20.206 *********** 2025-07-06 20:17:14.317857 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-07-06 20:17:14.317861 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:17:14.317866 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-07-06 20:17:14.317870 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:17:14.317875 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-07-06 20:17:14.317879 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:17:14.317884 | orchestrator | 2025-07-06 20:17:14.317888 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-07-06 20:17:14.317893 | orchestrator | Sunday 06 July 2025 20:16:27 +0000 (0:00:01.111) 0:10:21.318 *********** 2025-07-06 20:17:14.317900 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-07-06 20:17:14.317904 | orchestrator | 2025-07-06 20:17:14.317909 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-07-06 20:17:14.317914 | orchestrator | Sunday 06 July 2025 20:16:28 +0000 (0:00:00.200) 0:10:21.518 *********** 2025-07-06 20:17:14.317918 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2025-07-06 20:17:14.317923 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-06 20:17:14.317928 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-06 20:17:14.317932 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-06 20:17:14.317937 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-06 20:17:14.317945 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.317949 | orchestrator | 2025-07-06 20:17:14.317954 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-07-06 20:17:14.317958 | orchestrator | Sunday 06 July 2025 20:16:28 +0000 (0:00:00.837) 0:10:22.356 *********** 2025-07-06 20:17:14.317963 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-06 20:17:14.317967 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-06 20:17:14.317972 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-06 20:17:14.317976 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-06 20:17:14.317981 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-07-06 20:17:14.317985 | orchestrator | skipping: [testbed-node-3] 2025-07-06 
20:17:14.317990 | orchestrator | 2025-07-06 20:17:14.317994 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-07-06 20:17:14.317999 | orchestrator | Sunday 06 July 2025 20:16:29 +0000 (0:00:00.526) 0:10:22.883 *********** 2025-07-06 20:17:14.318003 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-07-06 20:17:14.318008 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-07-06 20:17:14.318012 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-07-06 20:17:14.318033 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-07-06 20:17:14.318038 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-07-06 20:17:14.318043 | orchestrator | 2025-07-06 20:17:14.318047 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-07-06 20:17:14.318052 | orchestrator | Sunday 06 July 2025 20:17:00 +0000 (0:00:31.202) 0:10:54.085 *********** 2025-07-06 20:17:14.318059 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.318064 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.318068 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.318073 | orchestrator | 2025-07-06 20:17:14.318077 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-07-06 20:17:14.318082 | orchestrator | 
Sunday 06 July 2025 20:17:01 +0000 (0:00:00.303) 0:10:54.389 *********** 2025-07-06 20:17:14.318086 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.318091 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.318095 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.318100 | orchestrator | 2025-07-06 20:17:14.318104 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-07-06 20:17:14.318109 | orchestrator | Sunday 06 July 2025 20:17:01 +0000 (0:00:00.283) 0:10:54.673 *********** 2025-07-06 20:17:14.318113 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:17:14.318118 | orchestrator | 2025-07-06 20:17:14.318122 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-07-06 20:17:14.318127 | orchestrator | Sunday 06 July 2025 20:17:01 +0000 (0:00:00.637) 0:10:55.310 *********** 2025-07-06 20:17:14.318135 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:17:14.318139 | orchestrator | 2025-07-06 20:17:14.318144 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-07-06 20:17:14.318148 | orchestrator | Sunday 06 July 2025 20:17:02 +0000 (0:00:00.488) 0:10:55.798 *********** 2025-07-06 20:17:14.318156 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:17:14.318160 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:17:14.318165 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:17:14.318200 | orchestrator | 2025-07-06 20:17:14.318205 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-07-06 20:17:14.318209 | orchestrator | Sunday 06 July 2025 20:17:03 +0000 (0:00:01.211) 0:10:57.009 *********** 2025-07-06 20:17:14.318214 | orchestrator | changed: 
[testbed-node-3] 2025-07-06 20:17:14.318218 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:17:14.318222 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:17:14.318227 | orchestrator | 2025-07-06 20:17:14.318231 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-07-06 20:17:14.318236 | orchestrator | Sunday 06 July 2025 20:17:04 +0000 (0:00:01.264) 0:10:58.274 *********** 2025-07-06 20:17:14.318240 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:17:14.318245 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:17:14.318249 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:17:14.318254 | orchestrator | 2025-07-06 20:17:14.318258 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-07-06 20:17:14.318262 | orchestrator | Sunday 06 July 2025 20:17:06 +0000 (0:00:01.781) 0:11:00.055 *********** 2025-07-06 20:17:14.318267 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-07-06 20:17:14.318271 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-07-06 20:17:14.318276 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-07-06 20:17:14.318281 | orchestrator | 2025-07-06 20:17:14.318285 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-07-06 20:17:14.318289 | orchestrator | Sunday 06 July 2025 20:17:09 +0000 (0:00:02.397) 0:11:02.453 *********** 2025-07-06 20:17:14.318294 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.318298 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.318303 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.318307 | orchestrator 
| 2025-07-06 20:17:14.318312 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-07-06 20:17:14.318316 | orchestrator | Sunday 06 July 2025 20:17:09 +0000 (0:00:00.298) 0:11:02.752 *********** 2025-07-06 20:17:14.318321 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:17:14.318325 | orchestrator | 2025-07-06 20:17:14.318329 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-07-06 20:17:14.318334 | orchestrator | Sunday 06 July 2025 20:17:09 +0000 (0:00:00.442) 0:11:03.194 *********** 2025-07-06 20:17:14.318338 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:17:14.318343 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:17:14.318347 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:17:14.318352 | orchestrator | 2025-07-06 20:17:14.318356 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-07-06 20:17:14.318360 | orchestrator | Sunday 06 July 2025 20:17:10 +0000 (0:00:00.454) 0:11:03.648 *********** 2025-07-06 20:17:14.318365 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:17:14.318369 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:17:14.318374 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:17:14.318378 | orchestrator | 2025-07-06 20:17:14.318383 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-07-06 20:17:14.318391 | orchestrator | Sunday 06 July 2025 20:17:10 +0000 (0:00:00.297) 0:11:03.946 *********** 2025-07-06 20:17:14.318396 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-06 20:17:14.318400 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-06 20:17:14.318405 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-06 20:17:14.318409 | orchestrator 
| skipping: [testbed-node-3] 2025-07-06 20:17:14.318413 | orchestrator | 2025-07-06 20:17:14.318418 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-07-06 20:17:14.318422 | orchestrator | Sunday 06 July 2025 20:17:11 +0000 (0:00:00.579) 0:11:04.525 *********** 2025-07-06 20:17:14.318427 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:17:14.318431 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:17:14.318440 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:17:14.318444 | orchestrator | 2025-07-06 20:17:14.318449 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:17:14.318453 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2025-07-06 20:17:14.318458 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-07-06 20:17:14.318463 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-07-06 20:17:14.318467 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2025-07-06 20:17:14.318472 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-07-06 20:17:14.318476 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-07-06 20:17:14.318480 | orchestrator | 2025-07-06 20:17:14.318485 | orchestrator | 2025-07-06 20:17:14.318490 | orchestrator | 2025-07-06 20:17:14.318497 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:17:14.318501 | orchestrator | Sunday 06 July 2025 20:17:11 +0000 (0:00:00.201) 0:11:04.726 *********** 2025-07-06 20:17:14.318506 | orchestrator | =============================================================================== 
2025-07-06 20:17:14.318510 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 67.35s 2025-07-06 20:17:14.318515 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 43.27s 2025-07-06 20:17:14.318519 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.20s 2025-07-06 20:17:14.318524 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.26s 2025-07-06 20:17:14.318528 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.91s 2025-07-06 20:17:14.318532 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.88s 2025-07-06 20:17:14.318537 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 13.26s 2025-07-06 20:17:14.318541 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.31s 2025-07-06 20:17:14.318546 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.70s 2025-07-06 20:17:14.318550 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.70s 2025-07-06 20:17:14.318554 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.48s 2025-07-06 20:17:14.318559 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.31s 2025-07-06 20:17:14.318563 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.86s 2025-07-06 20:17:14.318568 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.64s 2025-07-06 20:17:14.318576 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 4.54s 2025-07-06 20:17:14.318580 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.11s 2025-07-06 
20:17:14.318584 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.68s 2025-07-06 20:17:14.318589 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.51s 2025-07-06 20:17:14.318593 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.46s 2025-07-06 20:17:14.318598 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.45s 2025-07-06 20:17:17.346127 | orchestrator | 2025-07-06 20:17:17 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED 2025-07-06 20:17:17.347660 | orchestrator | 2025-07-06 20:17:17 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED 2025-07-06 20:17:17.349712 | orchestrator | 2025-07-06 20:17:17 | INFO  | Task 5d85f3c0-ac7a-408b-b935-f760d359d094 is in state STARTED 2025-07-06 20:17:17.349737 | orchestrator | 2025-07-06 20:17:17 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:17:20.384595 | orchestrator | 2025-07-06 20:17:20 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED 2025-07-06 20:17:20.386382 | orchestrator | 2025-07-06 20:17:20 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED 2025-07-06 20:17:20.388034 | orchestrator | 2025-07-06 20:17:20 | INFO  | Task 5d85f3c0-ac7a-408b-b935-f760d359d094 is in state STARTED 2025-07-06 20:17:20.388059 | orchestrator | 2025-07-06 20:17:20 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:17:23.426917 | orchestrator | 2025-07-06 20:17:23 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED 2025-07-06 20:17:23.429842 | orchestrator | 2025-07-06 20:17:23 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED 2025-07-06 20:17:23.431443 | orchestrator | 2025-07-06 20:17:23 | INFO  | Task 5d85f3c0-ac7a-408b-b935-f760d359d094 is in state STARTED 2025-07-06 20:17:23.431488 | orchestrator | 2025-07-06 20:17:23 | 
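The PLAY RECAP and per-task timing lines above follow a fixed textual shape. As a hypothetical illustration (the function name and field handling are assumptions, not part of this job), the per-host counters can be extracted like this:

```python
import re

# Matches an Ansible "PLAY RECAP" host line such as:
# "testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0"
RECAP_RE = re.compile(r"(?P<host>\S+)\s*:\s*(?P<fields>(?:\w+=\d+\s*)+)")


def parse_recap_line(line: str) -> tuple[str, dict[str, int]]:
    """Return the host name and its counter fields from one recap line."""
    m = RECAP_RE.search(line)
    if not m:
        raise ValueError(f"not a recap line: {line!r}")
    fields = dict(
        (key, int(value))
        for key, value in (pair.split("=") for pair in m.group("fields").split())
    )
    return m.group("host"), fields


host, stats = parse_recap_line(
    "testbed-node-3 : ok=193  changed=45  unreachable=0 "
    "failed=0 skipped=162  rescued=0 ignored=0"
)
# A run like the one above is healthy when no host reports failures
# or unreachable nodes.
assert stats["failed"] == 0 and stats["unreachable"] == 0
```

This is only a sketch of reading the console output; Ansible callback plugins expose the same counters programmatically without text parsing.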
INFO  | Wait 1 second(s) until the next check 2025-07-06 20:17:26.476868 | orchestrator | 2025-07-06 20:17:26 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED 2025-07-06 20:17:26.477546 | orchestrator | 2025-07-06 20:17:26 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED 2025-07-06 20:17:26.478382 | orchestrator | 2025-07-06 20:17:26 | INFO  | Task 5d85f3c0-ac7a-408b-b935-f760d359d094 is in state STARTED 2025-07-06 20:17:26.478427 | orchestrator | 2025-07-06 20:17:26 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:17:29.526107 | orchestrator | 2025-07-06 20:17:29 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED 2025-07-06 20:17:29.526289 | orchestrator | 2025-07-06 20:17:29 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED 2025-07-06 20:17:29.526795 | orchestrator | 2025-07-06 20:17:29 | INFO  | Task 5d85f3c0-ac7a-408b-b935-f760d359d094 is in state STARTED 2025-07-06 20:17:29.526821 | orchestrator | 2025-07-06 20:17:29 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:17:32.565073 | orchestrator | 2025-07-06 20:17:32 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED 2025-07-06 20:17:32.566221 | orchestrator | 2025-07-06 20:17:32 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED 2025-07-06 20:17:32.568646 | orchestrator | 2025-07-06 20:17:32 | INFO  | Task 5d85f3c0-ac7a-408b-b935-f760d359d094 is in state STARTED 2025-07-06 20:17:32.568671 | orchestrator | 2025-07-06 20:17:32 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:17:35.612503 | orchestrator | 2025-07-06 20:17:35 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED 2025-07-06 20:17:35.616833 | orchestrator | 2025-07-06 20:17:35 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED 2025-07-06 20:17:35.619705 | orchestrator | 2025-07-06 20:17:35 | INFO  | Task 5d85f3c0-ac7a-408b-b935-f760d359d094 is in 
state STARTED 2025-07-06 20:17:35.620174 | orchestrator | 2025-07-06 20:17:35 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:17:38.662155 | orchestrator | 2025-07-06 20:17:38 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED 2025-07-06 20:17:38.664892 | orchestrator | 2025-07-06 20:17:38 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED 2025-07-06 20:17:38.667097 | orchestrator | 2025-07-06 20:17:38 | INFO  | Task 5d85f3c0-ac7a-408b-b935-f760d359d094 is in state STARTED 2025-07-06 20:17:38.667239 | orchestrator | 2025-07-06 20:17:38 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:17:41.720677 | orchestrator | 2025-07-06 20:17:41 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED 2025-07-06 20:17:41.722650 | orchestrator | 2025-07-06 20:17:41 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED 2025-07-06 20:17:41.724223 | orchestrator | 2025-07-06 20:17:41 | INFO  | Task 5d85f3c0-ac7a-408b-b935-f760d359d094 is in state STARTED 2025-07-06 20:17:41.724237 | orchestrator | 2025-07-06 20:17:41 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:17:44.768119 | orchestrator | 2025-07-06 20:17:44 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED 2025-07-06 20:17:44.770688 | orchestrator | 2025-07-06 20:17:44 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED 2025-07-06 20:17:44.771612 | orchestrator | 2025-07-06 20:17:44 | INFO  | Task 5d85f3c0-ac7a-408b-b935-f760d359d094 is in state STARTED 2025-07-06 20:17:44.771942 | orchestrator | 2025-07-06 20:17:44 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:17:47.818887 | orchestrator | 2025-07-06 20:17:47 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED 2025-07-06 20:17:47.821546 | orchestrator | 2025-07-06 20:17:47 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED 2025-07-06 20:17:47.823799 | orchestrator 
| 2025-07-06 20:17:47 | INFO  | Task 5d85f3c0-ac7a-408b-b935-f760d359d094 is in state STARTED 2025-07-06 20:17:47.824060 | orchestrator | 2025-07-06 20:17:47 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:17:50.869612 | orchestrator | 2025-07-06 20:17:50 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED 2025-07-06 20:17:50.871072 | orchestrator | 2025-07-06 20:17:50 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED 2025-07-06 20:17:50.873763 | orchestrator | 2025-07-06 20:17:50 | INFO  | Task 5d85f3c0-ac7a-408b-b935-f760d359d094 is in state STARTED 2025-07-06 20:17:50.873978 | orchestrator | 2025-07-06 20:17:50 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:17:53.920458 | orchestrator | 2025-07-06 20:17:53 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED 2025-07-06 20:17:53.922426 | orchestrator | 2025-07-06 20:17:53 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state STARTED 2025-07-06 20:17:53.924487 | orchestrator | 2025-07-06 20:17:53 | INFO  | Task 5d85f3c0-ac7a-408b-b935-f760d359d094 is in state STARTED 2025-07-06 20:17:53.924596 | orchestrator | 2025-07-06 20:17:53 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:17:56.966413 | orchestrator | 2025-07-06 20:17:56 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED 2025-07-06 20:17:56.968450 | orchestrator | 2025-07-06 20:17:56 | INFO  | Task 67052949-b675-4cdf-8ebf-3d1d55dfe020 is in state SUCCESS 2025-07-06 20:17:56.970736 | orchestrator | 2025-07-06 20:17:56.970821 | orchestrator | 2025-07-06 20:17:56.970836 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-06 20:17:56.970849 | orchestrator | 2025-07-06 20:17:56.970860 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-06 20:17:56.970872 | orchestrator | Sunday 06 July 2025 20:15:04 +0000 (0:00:00.287) 
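The manager output above polls each task ID once per cycle, logs its state, and sleeps until one of them leaves STARTED for SUCCESS. A rough sketch of that polling pattern (the `get_state` callback is a stand-in for the real OSISM task API, which is not shown in this log):

```python
import time
from typing import Callable


def wait_for_tasks(
    task_ids: list[str],
    get_state: Callable[[str], str],
    interval: float = 1.0,
    timeout: float = 300.0,
) -> dict[str, str]:
    """Poll every task until none is left in STARTED, or the timeout expires."""
    deadline = time.monotonic() + timeout
    states = {tid: "STARTED" for tid in task_ids}
    while any(state == "STARTED" for state in states.values()):
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still running: {states}")
        for tid in task_ids:
            if states[tid] == "STARTED":
                # Re-check only tasks that have not finished yet.
                states[tid] = get_state(tid)
        # Matches the "Wait 1 second(s) until the next check" cadence above.
        time.sleep(interval)
    return states
```

A fixed one-second interval keeps the log readable; a production poller might add jitter or exponential backoff instead.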
0:00:00.287 *********** 2025-07-06 20:17:56.970883 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:17:56.970895 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:17:56.970906 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:17:56.970916 | orchestrator | 2025-07-06 20:17:56.970927 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-06 20:17:56.970938 | orchestrator | Sunday 06 July 2025 20:15:04 +0000 (0:00:00.321) 0:00:00.608 *********** 2025-07-06 20:17:56.970950 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-07-06 20:17:56.970960 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-07-06 20:17:56.970971 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-07-06 20:17:56.970982 | orchestrator | 2025-07-06 20:17:56.970992 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-07-06 20:17:56.971003 | orchestrator | 2025-07-06 20:17:56.971013 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-07-06 20:17:56.971024 | orchestrator | Sunday 06 July 2025 20:15:05 +0000 (0:00:00.380) 0:00:00.989 *********** 2025-07-06 20:17:56.971035 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:17:56.971067 | orchestrator | 2025-07-06 20:17:56.971079 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-07-06 20:17:56.971090 | orchestrator | Sunday 06 July 2025 20:15:05 +0000 (0:00:00.445) 0:00:01.434 *********** 2025-07-06 20:17:56.971101 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-07-06 20:17:56.971111 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-07-06 20:17:56.971122 | orchestrator | changed: 
[testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-07-06 20:17:56.971132 | orchestrator | 2025-07-06 20:17:56.971143 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-07-06 20:17:56.971178 | orchestrator | Sunday 06 July 2025 20:15:06 +0000 (0:00:00.648) 0:00:02.083 *********** 2025-07-06 20:17:56.971193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-06 20:17:56.971223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-06 20:17:56.971271 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-06 20:17:56.971289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}}}}) 2025-07-06 20:17:56.971306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-06 20:17:56.971326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-06 20:17:56.971348 | orchestrator | 2025-07-06 20:17:56.971362 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-07-06 20:17:56.971375 | orchestrator | Sunday 06 July 2025 20:15:08 +0000 (0:00:01.619) 0:00:03.702 *********** 2025-07-06 20:17:56.971388 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:17:56.971399 | orchestrator | 2025-07-06 20:17:56.971410 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-07-06 20:17:56.971421 | orchestrator | Sunday 06 July 2025 20:15:08 +0000 (0:00:00.477) 0:00:04.180 *********** 2025-07-06 20:17:56.971440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-06 20:17:56.971453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 
'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-06 20:17:56.971465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-06 20:17:56.971477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-06 20:17:56.971508 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-06 20:17:56.971522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-06 20:17:56.971533 | orchestrator | 2025-07-06 20:17:56.971544 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-07-06 20:17:56.971556 | orchestrator | Sunday 06 July 2025 20:15:11 +0000 (0:00:02.848) 0:00:07.028 *********** 2025-07-06 20:17:56.971567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}}}})  2025-07-06 20:17:56.971580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-06 20:17:56.971604 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:56.971616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-06 20:17:56.971636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-06 20:17:56.971648 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:56.971659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-06 20:17:56.971671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-06 20:17:56.971692 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:56.971703 | orchestrator | 2025-07-06 20:17:56.971714 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-07-06 20:17:56.971725 | orchestrator | Sunday 06 July 2025 20:15:12 +0000 (0:00:01.429) 0:00:08.457 *********** 2025-07-06 20:17:56.971741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-06 20:17:56.971760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-06 20:17:56.971772 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:56.971783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-06 20:17:56.971795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-06 20:17:56.971814 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:56.971830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-06 20:17:56.971851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-06 20:17:56.971862 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:56.971873 | orchestrator | 2025-07-06 20:17:56.971884 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-07-06 20:17:56.971895 | orchestrator | Sunday 06 July 2025 20:15:13 +0000 (0:00:00.795) 0:00:09.253 *********** 2025-07-06 20:17:56.971906 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-06 20:17:56.971924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-06 20:17:56.971947 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-06 20:17:56.971967 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-06 20:17:56.971980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-06 20:17:56.971992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-06 20:17:56.972011 | orchestrator | 2025-07-06 20:17:56.972022 | orchestrator | TASK [opensearch : 
Copying over opensearch service config file] **************** 2025-07-06 20:17:56.972033 | orchestrator | Sunday 06 July 2025 20:15:16 +0000 (0:00:02.412) 0:00:11.665 *********** 2025-07-06 20:17:56.972044 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:17:56.972055 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:17:56.972066 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:17:56.972076 | orchestrator | 2025-07-06 20:17:56.972087 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-07-06 20:17:56.972099 | orchestrator | Sunday 06 July 2025 20:15:19 +0000 (0:00:03.320) 0:00:14.986 *********** 2025-07-06 20:17:56.972109 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:17:56.972120 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:17:56.972131 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:17:56.972142 | orchestrator | 2025-07-06 20:17:56.972171 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-07-06 20:17:56.972183 | orchestrator | Sunday 06 July 2025 20:15:20 +0000 (0:00:01.635) 0:00:16.621 *********** 2025-07-06 20:17:56.972200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-06 20:17:56.972220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-06 20:17:56.972232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-06 20:17:56.972252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 
'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-06 20:17:56.972269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-06 20:17:56.972289 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-06 20:17:56.972301 | orchestrator | 2025-07-06 20:17:56.972312 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-07-06 20:17:56.972323 | orchestrator | Sunday 06 July 2025 20:15:22 +0000 (0:00:01.899) 0:00:18.521 *********** 2025-07-06 20:17:56.972340 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:56.972351 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:17:56.972362 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:17:56.972373 | orchestrator | 2025-07-06 20:17:56.972384 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-07-06 20:17:56.972395 | orchestrator | Sunday 06 July 2025 20:15:23 +0000 (0:00:00.302) 0:00:18.823 *********** 2025-07-06 20:17:56.972406 | orchestrator | 2025-07-06 20:17:56.972417 | orchestrator | TASK [opensearch : Flush handlers] 
********************************************* 2025-07-06 20:17:56.972428 | orchestrator | Sunday 06 July 2025 20:15:23 +0000 (0:00:00.062) 0:00:18.886 *********** 2025-07-06 20:17:56.972438 | orchestrator | 2025-07-06 20:17:56.972449 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-07-06 20:17:56.972460 | orchestrator | Sunday 06 July 2025 20:15:23 +0000 (0:00:00.063) 0:00:18.949 *********** 2025-07-06 20:17:56.972471 | orchestrator | 2025-07-06 20:17:56.972482 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-07-06 20:17:56.972493 | orchestrator | Sunday 06 July 2025 20:15:23 +0000 (0:00:00.242) 0:00:19.191 *********** 2025-07-06 20:17:56.972504 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:56.972515 | orchestrator | 2025-07-06 20:17:56.972526 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-07-06 20:17:56.972536 | orchestrator | Sunday 06 July 2025 20:15:23 +0000 (0:00:00.214) 0:00:19.406 *********** 2025-07-06 20:17:56.972547 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:17:56.972558 | orchestrator | 2025-07-06 20:17:56.972569 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-07-06 20:17:56.972580 | orchestrator | Sunday 06 July 2025 20:15:23 +0000 (0:00:00.191) 0:00:19.597 *********** 2025-07-06 20:17:56.972591 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:17:56.972601 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:17:56.972612 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:17:56.972623 | orchestrator | 2025-07-06 20:17:56.972634 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-07-06 20:17:56.972645 | orchestrator | Sunday 06 July 2025 20:16:28 +0000 (0:01:04.424) 0:01:24.022 *********** 2025-07-06 20:17:56.972656 | orchestrator | changed: 
[testbed-node-0] 2025-07-06 20:17:56.972666 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:17:56.972677 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:17:56.972688 | orchestrator | 2025-07-06 20:17:56.972699 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-07-06 20:17:56.972710 | orchestrator | Sunday 06 July 2025 20:17:44 +0000 (0:01:16.366) 0:02:40.388 *********** 2025-07-06 20:17:56.972721 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:17:56.972732 | orchestrator | 2025-07-06 20:17:56.972742 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-07-06 20:17:56.972753 | orchestrator | Sunday 06 July 2025 20:17:45 +0000 (0:00:00.602) 0:02:40.991 *********** 2025-07-06 20:17:56.972764 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:17:56.972775 | orchestrator | 2025-07-06 20:17:56.972786 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-07-06 20:17:56.972796 | orchestrator | Sunday 06 July 2025 20:17:47 +0000 (0:00:02.495) 0:02:43.486 *********** 2025-07-06 20:17:56.972807 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:17:56.972818 | orchestrator | 2025-07-06 20:17:56.972829 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-07-06 20:17:56.972840 | orchestrator | Sunday 06 July 2025 20:17:50 +0000 (0:00:02.255) 0:02:45.741 *********** 2025-07-06 20:17:56.972855 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:17:56.972866 | orchestrator | 2025-07-06 20:17:56.972877 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-07-06 20:17:56.972888 | orchestrator | Sunday 06 July 2025 20:17:52 +0000 (0:00:02.527) 0:02:48.269 *********** 2025-07-06 20:17:56.972905 | orchestrator | changed: 
[testbed-node-0] 2025-07-06 20:17:56.972916 | orchestrator | 2025-07-06 20:17:56.972927 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:17:56.972938 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-06 20:17:56.972950 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-06 20:17:56.972961 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-06 20:17:56.972972 | orchestrator | 2025-07-06 20:17:56.972983 | orchestrator | 2025-07-06 20:17:56.972994 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:17:56.973010 | orchestrator | Sunday 06 July 2025 20:17:55 +0000 (0:00:02.492) 0:02:50.761 *********** 2025-07-06 20:17:56.973021 | orchestrator | =============================================================================== 2025-07-06 20:17:56.973031 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 76.37s 2025-07-06 20:17:56.973042 | orchestrator | opensearch : Restart opensearch container ------------------------------ 64.42s 2025-07-06 20:17:56.973053 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.32s 2025-07-06 20:17:56.973064 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.85s 2025-07-06 20:17:56.973075 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.53s 2025-07-06 20:17:56.973086 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.50s 2025-07-06 20:17:56.973096 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.49s 2025-07-06 20:17:56.973107 | orchestrator | opensearch : Copying over config.json files for 
services ---------------- 2.41s 2025-07-06 20:17:56.973118 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.26s 2025-07-06 20:17:56.973129 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.90s 2025-07-06 20:17:56.973140 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.64s 2025-07-06 20:17:56.973150 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.62s 2025-07-06 20:17:56.973192 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.43s 2025-07-06 20:17:56.973203 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.80s 2025-07-06 20:17:56.973214 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.65s 2025-07-06 20:17:56.973225 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.60s 2025-07-06 20:17:56.973235 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.48s 2025-07-06 20:17:56.973246 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.45s 2025-07-06 20:17:56.973257 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.38s 2025-07-06 20:17:56.973268 | orchestrator | opensearch : Flush handlers --------------------------------------------- 0.37s 2025-07-06 20:17:56.973279 | orchestrator | 2025-07-06 20:17:56 | INFO  | Task 5d85f3c0-ac7a-408b-b935-f760d359d094 is in state STARTED 2025-07-06 20:17:56.973290 | orchestrator | 2025-07-06 20:17:56 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:18:00.015524 | orchestrator | 2025-07-06 20:18:00 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED 2025-07-06 20:18:00.015627 | orchestrator | 2025-07-06 20:18:00 | INFO  | Task 
5d85f3c0-ac7a-408b-b935-f760d359d094 is in state STARTED 2025-07-06 20:18:00.015642 | orchestrator | 2025-07-06 20:18:00 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:18:03.058114 | orchestrator | 2025-07-06 20:18:03 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED 2025-07-06 20:18:03.060284 | orchestrator | 2025-07-06 20:18:03 | INFO  | Task 5d85f3c0-ac7a-408b-b935-f760d359d094 is in state STARTED 2025-07-06 20:18:03.060324 | orchestrator | 2025-07-06 20:18:03 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:18:06.116004 | orchestrator | 2025-07-06 20:18:06 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state STARTED 2025-07-06 20:18:06.117490 | orchestrator | 2025-07-06 20:18:06 | INFO  | Task 5d85f3c0-ac7a-408b-b935-f760d359d094 is in state STARTED 2025-07-06 20:18:06.117530 | orchestrator | 2025-07-06 20:18:06 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:18:09.163849 | orchestrator | 2025-07-06 20:18:09.163988 | orchestrator | 2025-07-06 20:18:09.163998 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-07-06 20:18:09.164006 | orchestrator | 2025-07-06 20:18:09.164013 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-07-06 20:18:09.164070 | orchestrator | Sunday 06 July 2025 20:15:04 +0000 (0:00:00.107) 0:00:00.107 *********** 2025-07-06 20:18:09.164081 | orchestrator | ok: [localhost] => { 2025-07-06 20:18:09.164089 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 
2025-07-06 20:18:09.164096 | orchestrator | } 2025-07-06 20:18:09.164103 | orchestrator | 2025-07-06 20:18:09.164109 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-07-06 20:18:09.164116 | orchestrator | Sunday 06 July 2025 20:15:04 +0000 (0:00:00.042) 0:00:00.149 *********** 2025-07-06 20:18:09.164123 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-07-06 20:18:09.164131 | orchestrator | ...ignoring 2025-07-06 20:18:09.164138 | orchestrator | 2025-07-06 20:18:09.164144 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-07-06 20:18:09.164199 | orchestrator | Sunday 06 July 2025 20:15:07 +0000 (0:00:02.789) 0:00:02.938 *********** 2025-07-06 20:18:09.164313 | orchestrator | skipping: [localhost] 2025-07-06 20:18:09.164322 | orchestrator | 2025-07-06 20:18:09.164329 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-07-06 20:18:09.164335 | orchestrator | Sunday 06 July 2025 20:15:07 +0000 (0:00:00.044) 0:00:02.983 *********** 2025-07-06 20:18:09.164342 | orchestrator | ok: [localhost] 2025-07-06 20:18:09.164348 | orchestrator | 2025-07-06 20:18:09.164354 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-06 20:18:09.164361 | orchestrator | 2025-07-06 20:18:09.164367 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-06 20:18:09.164373 | orchestrator | Sunday 06 July 2025 20:15:07 +0000 (0:00:00.150) 0:00:03.133 *********** 2025-07-06 20:18:09.164380 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:18:09.164386 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:18:09.164392 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:18:09.164398 | orchestrator | 2025-07-06 20:18:09.164404 | 
orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-06 20:18:09.164411 | orchestrator | Sunday 06 July 2025 20:15:07 +0000 (0:00:00.261) 0:00:03.395 *********** 2025-07-06 20:18:09.164417 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-07-06 20:18:09.164424 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-07-06 20:18:09.164430 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-07-06 20:18:09.164436 | orchestrator | 2025-07-06 20:18:09.164442 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-07-06 20:18:09.164449 | orchestrator | 2025-07-06 20:18:09.164455 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-07-06 20:18:09.164461 | orchestrator | Sunday 06 July 2025 20:15:08 +0000 (0:00:00.387) 0:00:03.783 *********** 2025-07-06 20:18:09.164484 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-07-06 20:18:09.164491 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-07-06 20:18:09.164497 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-07-06 20:18:09.164503 | orchestrator | 2025-07-06 20:18:09.164509 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-07-06 20:18:09.164515 | orchestrator | Sunday 06 July 2025 20:15:08 +0000 (0:00:00.345) 0:00:04.128 *********** 2025-07-06 20:18:09.164521 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:18:09.164528 | orchestrator | 2025-07-06 20:18:09.164534 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-07-06 20:18:09.164540 | orchestrator | Sunday 06 July 2025 20:15:09 +0000 (0:00:00.557) 0:00:04.686 *********** 2025-07-06 20:18:09.164569 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-06 20:18:09.164580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-06 20:18:09.164593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-06 20:18:09.164601 | orchestrator | 2025-07-06 20:18:09.164613 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-07-06 20:18:09.164620 | orchestrator | Sunday 06 July 2025 20:15:12 +0000 (0:00:03.252) 0:00:07.939 *********** 2025-07-06 20:18:09.164626 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:18:09.164633 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:18:09.164639 | 
orchestrator | skipping: [testbed-node-2] 2025-07-06 20:18:09.164649 | orchestrator | 2025-07-06 20:18:09.164655 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-07-06 20:18:09.164661 | orchestrator | Sunday 06 July 2025 20:15:13 +0000 (0:00:00.743) 0:00:08.682 *********** 2025-07-06 20:18:09.164668 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:18:09.164674 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:18:09.164680 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:18:09.164686 | orchestrator | 2025-07-06 20:18:09.164692 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-07-06 20:18:09.164699 | orchestrator | Sunday 06 July 2025 20:15:14 +0000 (0:00:01.376) 0:00:10.058 *********** 2025-07-06 20:18:09.164705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-06 20:18:09.164722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-06 20:18:09.164733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout 
client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-06 20:18:09.164744 | orchestrator | 2025-07-06 20:18:09.164750 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-07-06 20:18:09.164756 | orchestrator | Sunday 06 July 2025 20:15:18 +0000 (0:00:04.265) 0:00:14.324 *********** 2025-07-06 20:18:09.164763 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:18:09.164769 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:18:09.164775 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:18:09.164781 | orchestrator | 2025-07-06 20:18:09.164788 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-07-06 20:18:09.164794 | orchestrator | Sunday 06 July 2025 20:15:19 +0000 (0:00:01.033) 0:00:15.357 *********** 2025-07-06 20:18:09.164800 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:18:09.164806 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:18:09.164812 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:18:09.164819 | orchestrator | 2025-07-06 20:18:09.164825 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-07-06 20:18:09.164831 | orchestrator | Sunday 06 July 2025 20:15:23 +0000 (0:00:03.958) 0:00:19.316 *********** 2025-07-06 20:18:09.164837 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:18:09.164844 | orchestrator | 2025-07-06 20:18:09.164850 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-07-06 
20:18:09.164856 | orchestrator | Sunday 06 July 2025 20:15:24 +0000 (0:00:00.613) 0:00:19.930 *********** 2025-07-06 20:18:09.164871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-06 20:18:09.164883 | orchestrator | 
skipping: [testbed-node-0] 2025-07-06 20:18:09.164890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-06 20:18:09.164897 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:18:09.164911 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-06 20:18:09.164918 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:18:09.164925 | orchestrator | 2025-07-06 20:18:09.164931 | orchestrator | TASK [service-cert-copy : mariadb 
| Copying over backend internal TLS certificate] *** 2025-07-06 20:18:09.164942 | orchestrator | Sunday 06 July 2025 20:15:28 +0000 (0:00:03.655) 0:00:23.585 *********** 2025-07-06 20:18:09.164950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 
5 backup', '']}}}})  2025-07-06 20:18:09.164958 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:18:09.164970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-06 20:18:09.164978 | orchestrator | skipping: 
[testbed-node-2] 2025-07-06 20:18:09.164989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-06 20:18:09.165001 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:18:09.165009 | orchestrator | 2025-07-06 
20:18:09.165016 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-07-06 20:18:09.165023 | orchestrator | Sunday 06 July 2025 20:15:30 +0000 (0:00:02.078) 0:00:25.664 *********** 2025-07-06 20:18:09.165032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-06 20:18:09.165040 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:18:09.165053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', '']}}}})  2025-07-06 20:18:09.165069 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:18:09.165141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-06 20:18:09.165178 | orchestrator | skipping: 
[testbed-node-0] 2025-07-06 20:18:09.165185 | orchestrator | 2025-07-06 20:18:09.165191 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-07-06 20:18:09.165198 | orchestrator | Sunday 06 July 2025 20:15:32 +0000 (0:00:02.263) 0:00:27.927 ***********
2025-07-06 20:18:09 | INFO  | Task f985ed98-b2e3-4fa7-a147-998601e8a215 is in state SUCCESS
2025-07-06 20:18:09.165210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'],
'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-06 20:18:09.165234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-06 20:18:09.165320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 
fall 5 backup', '']}}}}) 2025-07-06 20:18:09.165336 | orchestrator | 2025-07-06 20:18:09.165343 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-07-06 20:18:09.165349 | orchestrator | Sunday 06 July 2025 20:15:35 +0000 (0:00:02.847) 0:00:30.775 *********** 2025-07-06 20:18:09.165356 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:18:09.165362 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:18:09.165368 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:18:09.165374 | orchestrator | 2025-07-06 20:18:09.165380 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-07-06 20:18:09.165387 | orchestrator | Sunday 06 July 2025 20:15:36 +0000 (0:00:01.022) 0:00:31.798 *********** 2025-07-06 20:18:09.165393 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:18:09.165399 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:18:09.165405 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:18:09.165412 | orchestrator | 2025-07-06 20:18:09.165418 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-07-06 20:18:09.165424 | orchestrator | Sunday 06 July 2025 20:15:36 +0000 (0:00:00.385) 0:00:32.183 *********** 2025-07-06 20:18:09.165430 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:18:09.165437 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:18:09.165443 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:18:09.165449 | orchestrator | 2025-07-06 20:18:09.165456 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-07-06 20:18:09.165462 | orchestrator | Sunday 06 July 2025 20:15:36 +0000 (0:00:00.361) 0:00:32.545 *********** 2025-07-06 20:18:09.165469 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-07-06 20:18:09.165476 | orchestrator | ...ignoring 2025-07-06 20:18:09.165482 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-07-06 20:18:09.165489 | orchestrator | ...ignoring 2025-07-06 20:18:09.165495 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-07-06 20:18:09.165501 | orchestrator | ...ignoring 2025-07-06 20:18:09.165507 | orchestrator | 2025-07-06 20:18:09.165514 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-07-06 20:18:09.165520 | orchestrator | Sunday 06 July 2025 20:15:48 +0000 (0:00:11.062) 0:00:43.607 *********** 2025-07-06 20:18:09.165526 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:18:09.165532 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:18:09.165542 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:18:09.165548 | orchestrator | 2025-07-06 20:18:09.165555 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-07-06 20:18:09.165561 | orchestrator | Sunday 06 July 2025 20:15:48 +0000 (0:00:00.646) 0:00:44.253 *********** 2025-07-06 20:18:09.165567 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:18:09.165573 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:18:09.165579 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:18:09.165585 | orchestrator | 2025-07-06 20:18:09.165592 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-07-06 20:18:09.165598 | orchestrator | Sunday 06 July 2025 20:15:49 +0000 (0:00:00.426) 0:00:44.679 *********** 2025-07-06 20:18:09.165604 | orchestrator | skipping: 
[testbed-node-0] 2025-07-06 20:18:09.165610 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:18:09.165616 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:18:09.165622 | orchestrator | 2025-07-06 20:18:09.165628 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-07-06 20:18:09.165635 | orchestrator | Sunday 06 July 2025 20:15:49 +0000 (0:00:00.427) 0:00:45.107 *********** 2025-07-06 20:18:09.165641 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:18:09.165647 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:18:09.165653 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:18:09.165659 | orchestrator | 2025-07-06 20:18:09.165666 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-07-06 20:18:09.165672 | orchestrator | Sunday 06 July 2025 20:15:49 +0000 (0:00:00.411) 0:00:45.519 *********** 2025-07-06 20:18:09.165678 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:18:09.165684 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:18:09.165691 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:18:09.165697 | orchestrator | 2025-07-06 20:18:09.165707 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-07-06 20:18:09.165714 | orchestrator | Sunday 06 July 2025 20:15:50 +0000 (0:00:00.639) 0:00:46.159 *********** 2025-07-06 20:18:09.165720 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:18:09.165726 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:18:09.165732 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:18:09.165742 | orchestrator | 2025-07-06 20:18:09.165748 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-07-06 20:18:09.165755 | orchestrator | Sunday 06 July 2025 20:15:51 +0000 (0:00:00.416) 0:00:46.575 *********** 2025-07-06 20:18:09.165761 | orchestrator | skipping: 
[testbed-node-1] 2025-07-06 20:18:09.165767 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:18:09.165773 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-07-06 20:18:09.165780 | orchestrator | 2025-07-06 20:18:09.165786 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-07-06 20:18:09.165792 | orchestrator | Sunday 06 July 2025 20:15:51 +0000 (0:00:00.395) 0:00:46.971 *********** 2025-07-06 20:18:09.165798 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:18:09.165805 | orchestrator | 2025-07-06 20:18:09.165811 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-07-06 20:18:09.165817 | orchestrator | Sunday 06 July 2025 20:16:01 +0000 (0:00:10.019) 0:00:56.991 *********** 2025-07-06 20:18:09.165823 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:18:09.165829 | orchestrator | 2025-07-06 20:18:09.165836 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-07-06 20:18:09.165842 | orchestrator | Sunday 06 July 2025 20:16:01 +0000 (0:00:00.132) 0:00:57.123 *********** 2025-07-06 20:18:09.165848 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:18:09.165854 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:18:09.165861 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:18:09.165867 | orchestrator | 2025-07-06 20:18:09.165873 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-07-06 20:18:09.165879 | orchestrator | Sunday 06 July 2025 20:16:02 +0000 (0:00:01.001) 0:00:58.125 *********** 2025-07-06 20:18:09.165890 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:18:09.165896 | orchestrator | 2025-07-06 20:18:09.165903 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-07-06 20:18:09.165909 | orchestrator | Sunday 06 
July 2025 20:16:10 +0000 (0:00:07.832) 0:01:05.957 *********** 2025-07-06 20:18:09.165915 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:18:09.165921 | orchestrator | 2025-07-06 20:18:09.165928 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-07-06 20:18:09.165934 | orchestrator | Sunday 06 July 2025 20:16:11 +0000 (0:00:01.550) 0:01:07.508 *********** 2025-07-06 20:18:09.165940 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:18:09.165946 | orchestrator | 2025-07-06 20:18:09.165952 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-07-06 20:18:09.165960 | orchestrator | Sunday 06 July 2025 20:16:14 +0000 (0:00:02.459) 0:01:09.967 *********** 2025-07-06 20:18:09.165967 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:18:09.165974 | orchestrator | 2025-07-06 20:18:09.165981 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-07-06 20:18:09.165989 | orchestrator | Sunday 06 July 2025 20:16:14 +0000 (0:00:00.113) 0:01:10.081 *********** 2025-07-06 20:18:09.165996 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:18:09.166003 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:18:09.166010 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:18:09.166069 | orchestrator | 2025-07-06 20:18:09.166077 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-07-06 20:18:09.166087 | orchestrator | Sunday 06 July 2025 20:16:15 +0000 (0:00:00.513) 0:01:10.595 *********** 2025-07-06 20:18:09.166098 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:18:09.166109 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-07-06 20:18:09.166119 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:18:09.166129 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:18:09.166140 | orchestrator | 
2025-07-06 20:18:09.166197 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-07-06 20:18:09.166210 | orchestrator | skipping: no hosts matched 2025-07-06 20:18:09.166220 | orchestrator | 2025-07-06 20:18:09.166230 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-07-06 20:18:09.166240 | orchestrator | 2025-07-06 20:18:09.166251 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-07-06 20:18:09.166261 | orchestrator | Sunday 06 July 2025 20:16:15 +0000 (0:00:00.341) 0:01:10.937 *********** 2025-07-06 20:18:09.166272 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:18:09.166282 | orchestrator | 2025-07-06 20:18:09.166292 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-07-06 20:18:09.166300 | orchestrator | Sunday 06 July 2025 20:16:32 +0000 (0:00:17.289) 0:01:28.227 *********** 2025-07-06 20:18:09.166307 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:18:09.166313 | orchestrator | 2025-07-06 20:18:09.166319 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-07-06 20:18:09.166325 | orchestrator | Sunday 06 July 2025 20:16:53 +0000 (0:00:20.612) 0:01:48.839 *********** 2025-07-06 20:18:09.166331 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:18:09.166337 | orchestrator | 2025-07-06 20:18:09.166343 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-07-06 20:18:09.166350 | orchestrator | 2025-07-06 20:18:09.166359 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-07-06 20:18:09.166371 | orchestrator | Sunday 06 July 2025 20:16:55 +0000 (0:00:02.245) 0:01:51.085 *********** 2025-07-06 20:18:09.166382 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:18:09.166393 | orchestrator | 
2025-07-06 20:18:09.166400 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-07-06 20:18:09.166406 | orchestrator | Sunday 06 July 2025 20:17:13 +0000 (0:00:17.792) 0:02:08.877 *********** 2025-07-06 20:18:09.166419 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:18:09.166425 | orchestrator | 2025-07-06 20:18:09.166432 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-07-06 20:18:09.166438 | orchestrator | Sunday 06 July 2025 20:17:33 +0000 (0:00:20.524) 0:02:29.402 *********** 2025-07-06 20:18:09.166451 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:18:09.166457 | orchestrator | 2025-07-06 20:18:09.166464 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-07-06 20:18:09.166470 | orchestrator | 2025-07-06 20:18:09.166476 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-07-06 20:18:09.166487 | orchestrator | Sunday 06 July 2025 20:17:36 +0000 (0:00:02.718) 0:02:32.120 *********** 2025-07-06 20:18:09.166494 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:18:09.166500 | orchestrator | 2025-07-06 20:18:09.166507 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-07-06 20:18:09.166514 | orchestrator | Sunday 06 July 2025 20:17:47 +0000 (0:00:10.665) 0:02:42.786 *********** 2025-07-06 20:18:09.166520 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:18:09.166526 | orchestrator | 2025-07-06 20:18:09.166534 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-07-06 20:18:09.166544 | orchestrator | Sunday 06 July 2025 20:17:51 +0000 (0:00:04.643) 0:02:47.430 *********** 2025-07-06 20:18:09.166554 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:18:09.166565 | orchestrator | 2025-07-06 20:18:09.166576 | orchestrator | PLAY [Apply mariadb 
post-configuration] **************************************** 2025-07-06 20:18:09.166586 | orchestrator | 2025-07-06 20:18:09.166596 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-07-06 20:18:09.166606 | orchestrator | Sunday 06 July 2025 20:17:54 +0000 (0:00:02.403) 0:02:49.833 *********** 2025-07-06 20:18:09.166620 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:18:09.166637 | orchestrator | 2025-07-06 20:18:09.166814 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-07-06 20:18:09.166833 | orchestrator | Sunday 06 July 2025 20:17:54 +0000 (0:00:00.532) 0:02:50.365 *********** 2025-07-06 20:18:09.166844 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:18:09.166855 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:18:09.166866 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:18:09.166877 | orchestrator | 2025-07-06 20:18:09.166887 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-07-06 20:18:09.166898 | orchestrator | Sunday 06 July 2025 20:17:57 +0000 (0:00:02.469) 0:02:52.835 *********** 2025-07-06 20:18:09.166909 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:18:09.167001 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:18:09.167017 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:18:09.167028 | orchestrator | 2025-07-06 20:18:09.167039 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-07-06 20:18:09.167050 | orchestrator | Sunday 06 July 2025 20:17:59 +0000 (0:00:02.040) 0:02:54.876 *********** 2025-07-06 20:18:09.167061 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:18:09.167072 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:18:09.167083 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:18:09.167093 | orchestrator | 
2025-07-06 20:18:09.167104 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-07-06 20:18:09.167115 | orchestrator | Sunday 06 July 2025 20:18:01 +0000 (0:00:02.027) 0:02:56.903 *********** 2025-07-06 20:18:09.167126 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:18:09.167137 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:18:09.167148 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:18:09.167198 | orchestrator | 2025-07-06 20:18:09.167209 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-07-06 20:18:09.167220 | orchestrator | Sunday 06 July 2025 20:18:03 +0000 (0:00:02.039) 0:02:58.942 *********** 2025-07-06 20:18:09.167231 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:18:09.167242 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:18:09.167386 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:18:09.167407 | orchestrator | 2025-07-06 20:18:09.167425 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-07-06 20:18:09.167445 | orchestrator | Sunday 06 July 2025 20:18:06 +0000 (0:00:02.973) 0:03:01.916 *********** 2025-07-06 20:18:09.167463 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:18:09.167481 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:18:09.167492 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:18:09.167503 | orchestrator | 2025-07-06 20:18:09.167514 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:18:09.167525 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-07-06 20:18:09.167537 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-07-06 20:18:09.167550 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1 
 2025-07-06 20:18:09.167561 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-07-06 20:18:09.167571 | orchestrator | 2025-07-06 20:18:09.167582 | orchestrator | 2025-07-06 20:18:09.167593 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:18:09.167604 | orchestrator | Sunday 06 July 2025 20:18:06 +0000 (0:00:00.217) 0:03:02.134 *********** 2025-07-06 20:18:09.167615 | orchestrator | =============================================================================== 2025-07-06 20:18:09.167625 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 41.14s 2025-07-06 20:18:09.167636 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 35.08s 2025-07-06 20:18:09.167649 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.06s 2025-07-06 20:18:09.167662 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 10.67s 2025-07-06 20:18:09.167674 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.02s 2025-07-06 20:18:09.167703 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.83s 2025-07-06 20:18:09.167723 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.96s 2025-07-06 20:18:09.167741 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.64s 2025-07-06 20:18:09.167769 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.27s 2025-07-06 20:18:09.167788 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.96s 2025-07-06 20:18:09.167806 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.66s 2025-07-06 20:18:09.167885 | orchestrator | 
mariadb : Ensuring config directories exist ----------------------------- 3.25s 2025-07-06 20:18:09.167907 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.97s 2025-07-06 20:18:09.167948 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 2.85s 2025-07-06 20:18:09.167967 | orchestrator | Check MariaDB service --------------------------------------------------- 2.79s 2025-07-06 20:18:09.167984 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.47s 2025-07-06 20:18:09.168002 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.46s 2025-07-06 20:18:09.168019 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.40s 2025-07-06 20:18:09.168036 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.26s 2025-07-06 20:18:09.168053 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.08s 2025-07-06 20:18:09.168071 | orchestrator | 2025-07-06 20:18:09 | INFO  | Task c88be4ea-6643-4e26-94d2-003b1142d99b is in state STARTED 2025-07-06 20:18:09.168106 | orchestrator | 2025-07-06 20:18:09 | INFO  | Task af09afdf-08ee-41da-8935-aab3c666249c is in state STARTED 2025-07-06 20:18:09.168123 | orchestrator | 2025-07-06 20:18:09 | INFO  | Task 5d85f3c0-ac7a-408b-b935-f760d359d094 is in state STARTED 2025-07-06 20:18:09.168141 | orchestrator | 2025-07-06 20:18:09 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:18:12.225673 | orchestrator | 2025-07-06 20:18:12 | INFO  | Task c88be4ea-6643-4e26-94d2-003b1142d99b is in state STARTED 2025-07-06 20:18:12.227204 | orchestrator | 2025-07-06 20:18:12 | INFO  | Task af09afdf-08ee-41da-8935-aab3c666249c is in state STARTED 2025-07-06 20:18:12.228175 | orchestrator | 2025-07-06 20:18:12 | INFO  | Task 5d85f3c0-ac7a-408b-b935-f760d359d094 is in state 
STARTED 2025-07-06 20:18:12.228256 | orchestrator | 2025-07-06 20:18:12 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:19:22.298246 | orchestrator | 2025-07-06 20:19:22 | INFO  | Task c88be4ea-6643-4e26-94d2-003b1142d99b is in state STARTED 2025-07-06 20:19:22.299489 | orchestrator | 2025-07-06 20:19:22 | INFO  | Task af09afdf-08ee-41da-8935-aab3c666249c is in state STARTED 2025-07-06 20:19:22.301583 | orchestrator | 2025-07-06 20:19:22 | INFO  | Task 62424c6f-edf0-4359-9131-41391021e999 is in state
STARTED
2025-07-06 20:19:22.305459 | orchestrator |
2025-07-06 20:19:22.305505 | orchestrator | 2025-07-06 20:19:22 | INFO  | Task 5d85f3c0-ac7a-408b-b935-f760d359d094 is in state SUCCESS
2025-07-06 20:19:22.305514 | orchestrator | 2025-07-06 20:19:22 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:19:22.307866 | orchestrator |
2025-07-06 20:19:22.307891 | orchestrator | PLAY [Create ceph pools] *******************************************************
2025-07-06 20:19:22.307896 | orchestrator |
2025-07-06 20:19:22.307900 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-07-06 20:19:22.307905 | orchestrator | Sunday 06 July 2025 20:17:15 +0000 (0:00:00.534) 0:00:00.534 ***********
2025-07-06 20:19:22.307909 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-06 20:19:22.307914 | orchestrator |
2025-07-06 20:19:22.307919 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-07-06 20:19:22.307923 | orchestrator | Sunday 06 July 2025 20:17:16 +0000 (0:00:00.526) 0:00:01.061 ***********
2025-07-06 20:19:22.307926 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:19:22.307932 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:19:22.307935 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:19:22.307939 | orchestrator |
2025-07-06 20:19:22.307943 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-07-06 20:19:22.307946 | orchestrator | Sunday 06 July 2025 20:17:16 +0000 (0:00:00.647) 0:00:01.708 ***********
2025-07-06 20:19:22.307950 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:19:22.307954 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:19:22.307958 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:19:22.307961 | orchestrator |
2025-07-06 20:19:22.307999 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-07-06 20:19:22.308003 | orchestrator | Sunday 06 July 2025 20:17:16 +0000 (0:00:00.260) 0:00:01.969 ***********
2025-07-06 20:19:22.308007 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:19:22.308011 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:19:22.308015 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:19:22.308019 | orchestrator |
2025-07-06 20:19:22.308023 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-07-06 20:19:22.308048 | orchestrator | Sunday 06 July 2025 20:17:17 +0000 (0:00:00.677) 0:00:02.646 ***********
2025-07-06 20:19:22.308054 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:19:22.308060 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:19:22.308066 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:19:22.308072 | orchestrator |
2025-07-06 20:19:22.308077 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-07-06 20:19:22.308083 | orchestrator | Sunday 06 July 2025 20:17:17 +0000 (0:00:00.281) 0:00:02.928 ***********
2025-07-06 20:19:22.308089 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:19:22.308094 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:19:22.308100 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:19:22.308106 | orchestrator |
2025-07-06 20:19:22.308111 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-07-06 20:19:22.308118 | orchestrator | Sunday 06 July 2025 20:17:18 +0000 (0:00:00.262) 0:00:03.190 ***********
2025-07-06 20:19:22.308123 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:19:22.308157 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:19:22.308164 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:19:22.308170 | orchestrator |
2025-07-06 20:19:22.308177 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-07-06 20:19:22.308183 | orchestrator | Sunday 06 July 2025 20:17:18 +0000 (0:00:00.277) 0:00:03.468 ***********
2025-07-06 20:19:22.308190 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:19:22.308198 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:19:22.308204 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:19:22.308211 | orchestrator |
2025-07-06 20:19:22.308215 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-07-06 20:19:22.308219 | orchestrator | Sunday 06 July 2025 20:17:18 +0000 (0:00:00.381) 0:00:03.850 ***********
2025-07-06 20:19:22.308223 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:19:22.308227 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:19:22.308230 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:19:22.308234 | orchestrator |
2025-07-06 20:19:22.308240 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-07-06 20:19:22.308247 | orchestrator | Sunday 06 July 2025 20:17:19 +0000 (0:00:00.269) 0:00:04.119 ***********
2025-07-06 20:19:22.308263 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-07-06 20:19:22.308267 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-07-06 20:19:22.308271 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-07-06 20:19:22.308275 | orchestrator |
2025-07-06 20:19:22.308278 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-07-06 20:19:22.308282 | orchestrator | Sunday 06 July 2025 20:17:19 +0000 (0:00:00.569) 0:00:04.688 ***********
2025-07-06 20:19:22.308286 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:19:22.308293 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:19:22.308297 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:19:22.308301 | orchestrator |
2025-07-06 20:19:22.308305 |
orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-07-06 20:19:22.308308 | orchestrator | Sunday 06 July 2025 20:17:20 +0000 (0:00:00.359) 0:00:05.048 ***********
2025-07-06 20:19:22.308312 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-07-06 20:19:22.308316 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-07-06 20:19:22.308319 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-07-06 20:19:22.308323 | orchestrator |
2025-07-06 20:19:22.308327 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-07-06 20:19:22.308330 | orchestrator | Sunday 06 July 2025 20:17:21 +0000 (0:00:01.962) 0:00:07.010 ***********
2025-07-06 20:19:22.308334 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-07-06 20:19:22.308338 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-07-06 20:19:22.308355 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-07-06 20:19:22.308359 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:19:22.308363 | orchestrator |
2025-07-06 20:19:22.308367 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-07-06 20:19:22.308379 | orchestrator | Sunday 06 July 2025 20:17:22 +0000 (0:00:00.354) 0:00:07.365 ***********
2025-07-06 20:19:22.308385 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-07-06 20:19:22.308392 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-07-06 20:19:22.308396 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-07-06 20:19:22.308400 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:19:22.308403 | orchestrator |
2025-07-06 20:19:22.308407 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-07-06 20:19:22.308411 | orchestrator | Sunday 06 July 2025 20:17:22 +0000 (0:00:00.647) 0:00:08.013 ***********
2025-07-06 20:19:22.308417 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-07-06 20:19:22.308485 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-07-06 20:19:22.308490 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-07-06 20:19:22.308494 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:19:22.308499 | orchestrator |
2025-07-06 20:19:22.308503 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-07-06 20:19:22.308508 | orchestrator | Sunday 06 July 2025 20:17:23 +0000 (0:00:00.135) 0:00:08.148 ***********
2025-07-06 20:19:22.308514 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'e4b0660d883b', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-07-06 20:17:20.624432', 'end': '2025-07-06 20:17:20.675771', 'delta': '0:00:00.051339', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e4b0660d883b'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-07-06 20:19:22.308524 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '8d8cc0f31518', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-07-06 20:17:21.272208', 'end': '2025-07-06 20:17:21.313210', 'delta': '0:00:00.041002', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['8d8cc0f31518'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-07-06 20:19:22.308539 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '4fcf94228996', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-07-06 20:17:21.803172', 'end': '2025-07-06 20:17:21.847769', 'delta': '0:00:00.044597', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['4fcf94228996'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-07-06 20:19:22.308544 | orchestrator |
2025-07-06 20:19:22.308548 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-07-06 20:19:22.308552 | orchestrator | Sunday 06 July 2025 20:17:23 +0000 (0:00:00.292) 0:00:08.440 ***********
2025-07-06 20:19:22.308556 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:19:22.308560 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:19:22.308564 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:19:22.308568 | orchestrator |
2025-07-06 20:19:22.308571 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-07-06 20:19:22.308575 | orchestrator | Sunday 06 July 2025 20:17:23 +0000 (0:00:00.398) 0:00:08.838 ***********
2025-07-06 20:19:22.308579 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2025-07-06 20:19:22.308583 | orchestrator |
2025-07-06 20:19:22.308587 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-07-06 20:19:22.308590 | orchestrator | Sunday 06 July 2025 20:17:25 +0000 (0:00:01.644) 0:00:10.483 *********** 2025-07-06
20:19:22.308594 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:19:22.308821 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:19:22.308828 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:19:22.308831 | orchestrator |
2025-07-06 20:19:22.308835 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-07-06 20:19:22.308839 | orchestrator | Sunday 06 July 2025 20:17:25 +0000 (0:00:00.269) 0:00:10.753 ***********
2025-07-06 20:19:22.308842 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:19:22.308846 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:19:22.308850 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:19:22.308853 | orchestrator |
2025-07-06 20:19:22.308857 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-07-06 20:19:22.308861 | orchestrator | Sunday 06 July 2025 20:17:26 +0000 (0:00:00.351) 0:00:11.104 ***********
2025-07-06 20:19:22.308865 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:19:22.308868 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:19:22.308872 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:19:22.308876 | orchestrator |
2025-07-06 20:19:22.308879 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-07-06 20:19:22.308883 | orchestrator | Sunday 06 July 2025 20:17:26 +0000 (0:00:00.367) 0:00:11.472 ***********
2025-07-06 20:19:22.308887 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:19:22.308890 | orchestrator |
2025-07-06 20:19:22.308894 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-07-06 20:19:22.308902 | orchestrator | Sunday 06 July 2025 20:17:26 +0000 (0:00:00.123) 0:00:11.596 ***********
2025-07-06 20:19:22.308906 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:19:22.308910 | orchestrator |
2025-07-06 20:19:22.308914 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-07-06 20:19:22.308917 | orchestrator | Sunday 06 July 2025 20:17:26 +0000 (0:00:00.222) 0:00:11.818 ***********
2025-07-06 20:19:22.308921 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:19:22.308925 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:19:22.308929 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:19:22.308932 | orchestrator |
2025-07-06 20:19:22.308936 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-07-06 20:19:22.308940 | orchestrator | Sunday 06 July 2025 20:17:27 +0000 (0:00:00.252) 0:00:12.071 ***********
2025-07-06 20:19:22.308943 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:19:22.308947 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:19:22.308951 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:19:22.308954 | orchestrator |
2025-07-06 20:19:22.308958 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-07-06 20:19:22.308962 | orchestrator | Sunday 06 July 2025 20:17:27 +0000 (0:00:00.271) 0:00:12.342 ***********
2025-07-06 20:19:22.308965 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:19:22.308969 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:19:22.308973 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:19:22.308976 | orchestrator |
2025-07-06 20:19:22.308980 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-07-06 20:19:22.308984 | orchestrator | Sunday 06 July 2025 20:17:27 +0000 (0:00:00.396) 0:00:12.738 ***********
2025-07-06 20:19:22.308987 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:19:22.308991 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:19:22.308995 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:19:22.308999 | orchestrator |
2025-07-06 20:19:22.309002 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-07-06 20:19:22.309006 | orchestrator | Sunday 06 July 2025 20:17:27 +0000 (0:00:00.267) 0:00:13.006 ***********
2025-07-06 20:19:22.309010 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:19:22.309013 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:19:22.309017 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:19:22.309021 | orchestrator |
2025-07-06 20:19:22.309024 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-07-06 20:19:22.309028 | orchestrator | Sunday 06 July 2025 20:17:28 +0000 (0:00:00.280) 0:00:13.287 ***********
2025-07-06 20:19:22.309035 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:19:22.309039 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:19:22.309043 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:19:22.309047 | orchestrator |
2025-07-06 20:19:22.309050 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-07-06 20:19:22.309066 | orchestrator | Sunday 06 July 2025 20:17:28 +0000 (0:00:00.281) 0:00:13.568 ***********
2025-07-06 20:19:22.309070 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:19:22.309074 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:19:22.309078 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:19:22.309081 | orchestrator |
2025-07-06 20:19:22.309085 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2025-07-06 20:19:22.309089 | orchestrator | Sunday 06 July 2025 20:17:28 +0000 (0:00:00.409) 0:00:13.978 ***********
2025-07-06 20:19:22.309093 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--22d6bcb2--409c--5bf5--80b4--f4dcfc8f2a09-osd--block--22d6bcb2--409c--5bf5--80b4--f4dcfc8f2a09',
'dm-uuid-LVM-rjbWF69KjZA1lciHg9IvVSUsIBY4Kg80WIL8NwVJDt2W0vvy1hn52SYPacAZrYqR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-07-06 20:19:22.309103 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1256d0fb--e60f--50ff--afd8--4edc5f2c0a15-osd--block--1256d0fb--e60f--50ff--afd8--4edc5f2c0a15', 'dm-uuid-LVM-CBB30BSMi7D1675QBE6Kop3W0221LIf87NC6xDU42NdnRR273XaCkk7Ufim7E7AZ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-07-06 20:19:22.309107 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-06 20:19:22.309111 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-06 20:19:22.309115 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-06 20:19:22.309119 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-06 20:19:22.309123 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-06 20:19:22.309161 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-06 20:19:22.309166 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-06 20:19:22.309170 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-06 20:19:22.309180 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8571afc0-e036-46a5-988a-49c98e90c838', 'scsi-SQEMU_QEMU_HARDDISK_8571afc0-e036-46a5-988a-49c98e90c838'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8571afc0-e036-46a5-988a-49c98e90c838-part1', 'scsi-SQEMU_QEMU_HARDDISK_8571afc0-e036-46a5-988a-49c98e90c838-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8571afc0-e036-46a5-988a-49c98e90c838-part14', 'scsi-SQEMU_QEMU_HARDDISK_8571afc0-e036-46a5-988a-49c98e90c838-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8571afc0-e036-46a5-988a-49c98e90c838-part15', 'scsi-SQEMU_QEMU_HARDDISK_8571afc0-e036-46a5-988a-49c98e90c838-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8571afc0-e036-46a5-988a-49c98e90c838-part16', 'scsi-SQEMU_QEMU_HARDDISK_8571afc0-e036-46a5-988a-49c98e90c838-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-07-06 20:19:22.309186 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--31ad454b--c5b7--54ad--acab--5839a456146b-osd--block--31ad454b--c5b7--54ad--acab--5839a456146b', 'dm-uuid-LVM-uunT5FMuh4bQub73Mz82ISkwuGVkewOLiWo1mOL02qkKQjgxsiMP7ETVSaq2tpWH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-07-06 20:19:22.309204 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--22d6bcb2--409c--5bf5--80b4--f4dcfc8f2a09-osd--block--22d6bcb2--409c--5bf5--80b4--f4dcfc8f2a09'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-QndfkN-PmDn-892W-SloC-8ojV-i8Ey-uDNKwa', 'scsi-0QEMU_QEMU_HARDDISK_3c29cd91-58e9-42ce-8653-990321e9d76b', 'scsi-SQEMU_QEMU_HARDDISK_3c29cd91-58e9-42ce-8653-990321e9d76b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-07-06 20:19:22.309209 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2eb0e424--9f58--550c--b8cf--76c1b52e517a-osd--block--2eb0e424--9f58--550c--b8cf--76c1b52e517a', 'dm-uuid-LVM-PUe3Aihj8e3x89rT30vRYRaGSeZDlm0iypVWDZzCyZEN8aOrGAcRuQeVn3b2BvIO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-07-06 20:19:22.309216 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1256d0fb--e60f--50ff--afd8--4edc5f2c0a15-osd--block--1256d0fb--e60f--50ff--afd8--4edc5f2c0a15'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-VC67cJ-cfHh-yd2t-xcB2-EPLx-jHbU-PYUcy2', 'scsi-0QEMU_QEMU_HARDDISK_fd99b70f-8aa3-4e15-8e66-07a34fe10111', 'scsi-SQEMU_QEMU_HARDDISK_fd99b70f-8aa3-4e15-8e66-07a34fe10111'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-07-06 20:19:22.309220 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-06 20:19:22.309224 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c9a7d91-c8fc-48f8-acad-853231e255dd', 'scsi-SQEMU_QEMU_HARDDISK_0c9a7d91-c8fc-48f8-acad-853231e255dd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-07-06 20:19:22.309229 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-06 20:19:22.309233 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-06-19-22-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-07-06 20:19:22.309248 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-06 20:19:22.309253 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-06 20:19:22.309263 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-06 20:19:22.309267 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-06 20:19:22.309270 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-06 20:19:22.309274 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:19:22.309278 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-06 20:19:22.309288 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c4ee01e1-6308-4ea1-8b72-f8c7aa0af3e8', 'scsi-SQEMU_QEMU_HARDDISK_c4ee01e1-6308-4ea1-8b72-f8c7aa0af3e8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c4ee01e1-6308-4ea1-8b72-f8c7aa0af3e8-part1', 'scsi-SQEMU_QEMU_HARDDISK_c4ee01e1-6308-4ea1-8b72-f8c7aa0af3e8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c4ee01e1-6308-4ea1-8b72-f8c7aa0af3e8-part14', 'scsi-SQEMU_QEMU_HARDDISK_c4ee01e1-6308-4ea1-8b72-f8c7aa0af3e8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c4ee01e1-6308-4ea1-8b72-f8c7aa0af3e8-part15', 'scsi-SQEMU_QEMU_HARDDISK_c4ee01e1-6308-4ea1-8b72-f8c7aa0af3e8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c4ee01e1-6308-4ea1-8b72-f8c7aa0af3e8-part16',
'scsi-SQEMU_QEMU_HARDDISK_c4ee01e1-6308-4ea1-8b72-f8c7aa0af3e8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:19:22.309296 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--31ad454b--c5b7--54ad--acab--5839a456146b-osd--block--31ad454b--c5b7--54ad--acab--5839a456146b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xKngM6-LQyz-Rj7F-7sve-UhFC-KKz3-x6W3RS', 'scsi-0QEMU_QEMU_HARDDISK_c523d18d-f688-4547-bb4c-d63e44be8719', 'scsi-SQEMU_QEMU_HARDDISK_c523d18d-f688-4547-bb4c-d63e44be8719'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:19:22.309301 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--2eb0e424--9f58--550c--b8cf--76c1b52e517a-osd--block--2eb0e424--9f58--550c--b8cf--76c1b52e517a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-aTjS2u-DqZb-KwhC-VTW1-S3tv-oUtt-8jR2oi', 'scsi-0QEMU_QEMU_HARDDISK_e42fce45-67a3-477c-881f-6db38785a929', 'scsi-SQEMU_QEMU_HARDDISK_e42fce45-67a3-477c-881f-6db38785a929'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:19:22.309305 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28d32b1f-54bf-4890-9371-a2140c9d3e48', 'scsi-SQEMU_QEMU_HARDDISK_28d32b1f-54bf-4890-9371-a2140c9d3e48'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:19:22.309309 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-06-19-22-25-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:19:22.309313 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:19:22.309317 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 
'host': '', 'links': {'ids': ['dm-name-ceph--fc1251bd--e592--50b3--b197--385f411a7339-osd--block--fc1251bd--e592--50b3--b197--385f411a7339', 'dm-uuid-LVM-O0gBJzBTc4KRPexI3RumJDTRHsjXEAJqrmIUWQrIiVWfdDBmbIDQHG2A4MuCinJ5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-06 20:19:22.309327 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b5f0fce0--432f--57fb--bebd--426658f60987-osd--block--b5f0fce0--432f--57fb--bebd--426658f60987', 'dm-uuid-LVM-177gejIMY5lQSIa8RjRlJ1ZfVu8100q5WAzDShduKuNhHFM4DFY36XReOHU4dGHQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-06 20:19:22.309336 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:19:22.309340 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:19:22.309344 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:19:22.309348 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:19:22.309352 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:19:22.309355 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:19:22.309359 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:19:22.309363 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-06 20:19:22.309372 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8627e713-83d6-40b5-b9ad-70826e27e3e5', 'scsi-SQEMU_QEMU_HARDDISK_8627e713-83d6-40b5-b9ad-70826e27e3e5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8627e713-83d6-40b5-b9ad-70826e27e3e5-part1', 'scsi-SQEMU_QEMU_HARDDISK_8627e713-83d6-40b5-b9ad-70826e27e3e5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8627e713-83d6-40b5-b9ad-70826e27e3e5-part14', 'scsi-SQEMU_QEMU_HARDDISK_8627e713-83d6-40b5-b9ad-70826e27e3e5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8627e713-83d6-40b5-b9ad-70826e27e3e5-part15', 'scsi-SQEMU_QEMU_HARDDISK_8627e713-83d6-40b5-b9ad-70826e27e3e5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8627e713-83d6-40b5-b9ad-70826e27e3e5-part16', 'scsi-SQEMU_QEMU_HARDDISK_8627e713-83d6-40b5-b9ad-70826e27e3e5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:19:22.309380 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--fc1251bd--e592--50b3--b197--385f411a7339-osd--block--fc1251bd--e592--50b3--b197--385f411a7339'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-RyHzxU-aBiw-OJMc-20Q4-Jk3v-wYcp-56OPxc', 'scsi-0QEMU_QEMU_HARDDISK_4a0eaf3f-1395-4073-9878-c6e703eff332', 'scsi-SQEMU_QEMU_HARDDISK_4a0eaf3f-1395-4073-9878-c6e703eff332'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:19:22.309384 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b5f0fce0--432f--57fb--bebd--426658f60987-osd--block--b5f0fce0--432f--57fb--bebd--426658f60987'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-eNwe9v-UwcW-mdfT-UY3c-3ejI-jUS1-pkX8o1', 'scsi-0QEMU_QEMU_HARDDISK_1751cfdb-b4ca-4b06-9fa0-b986eec2737a', 'scsi-SQEMU_QEMU_HARDDISK_1751cfdb-b4ca-4b06-9fa0-b986eec2737a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:19:22.309388 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29aeef2c-15f7-4912-be6e-922934b043d5', 'scsi-SQEMU_QEMU_HARDDISK_29aeef2c-15f7-4912-be6e-922934b043d5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:19:22.309402 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-06-19-22-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-06 20:19:22.309406 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:19:22.309410 | orchestrator | 2025-07-06 20:19:22.309414 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-07-06 20:19:22.309417 | orchestrator | Sunday 06 July 2025 20:17:29 +0000 (0:00:00.500) 0:00:14.478 *********** 2025-07-06 20:19:22.309422 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--22d6bcb2--409c--5bf5--80b4--f4dcfc8f2a09-osd--block--22d6bcb2--409c--5bf5--80b4--f4dcfc8f2a09', 'dm-uuid-LVM-rjbWF69KjZA1lciHg9IvVSUsIBY4Kg80WIL8NwVJDt2W0vvy1hn52SYPacAZrYqR'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:19:22.309426 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1256d0fb--e60f--50ff--afd8--4edc5f2c0a15-osd--block--1256d0fb--e60f--50ff--afd8--4edc5f2c0a15', 'dm-uuid-LVM-CBB30BSMi7D1675QBE6Kop3W0221LIf87NC6xDU42NdnRR273XaCkk7Ufim7E7AZ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:19:22.309430 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:19:22.309434 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:19:22.309438 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:19:22.309451 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:19:22.309455 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:19:22.309459 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--31ad454b--c5b7--54ad--acab--5839a456146b-osd--block--31ad454b--c5b7--54ad--acab--5839a456146b', 'dm-uuid-LVM-uunT5FMuh4bQub73Mz82ISkwuGVkewOLiWo1mOL02qkKQjgxsiMP7ETVSaq2tpWH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:19:22.309463 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:19:22.309468 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2eb0e424--9f58--550c--b8cf--76c1b52e517a-osd--block--2eb0e424--9f58--550c--b8cf--76c1b52e517a', 'dm-uuid-LVM-PUe3Aihj8e3x89rT30vRYRaGSeZDlm0iypVWDZzCyZEN8aOrGAcRuQeVn3b2BvIO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:19:22.309472 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:19:22.309486 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:19:22.309491 | orchestrator | skipping: 
[testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:19:22.309496 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:19:22.309501 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8571afc0-e036-46a5-988a-49c98e90c838', 'scsi-SQEMU_QEMU_HARDDISK_8571afc0-e036-46a5-988a-49c98e90c838'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8571afc0-e036-46a5-988a-49c98e90c838-part1', 'scsi-SQEMU_QEMU_HARDDISK_8571afc0-e036-46a5-988a-49c98e90c838-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8571afc0-e036-46a5-988a-49c98e90c838-part14', 'scsi-SQEMU_QEMU_HARDDISK_8571afc0-e036-46a5-988a-49c98e90c838-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8571afc0-e036-46a5-988a-49c98e90c838-part15', 'scsi-SQEMU_QEMU_HARDDISK_8571afc0-e036-46a5-988a-49c98e90c838-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8571afc0-e036-46a5-988a-49c98e90c838-part16', 'scsi-SQEMU_QEMU_HARDDISK_8571afc0-e036-46a5-988a-49c98e90c838-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-07-06 20:19:22.309516 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:19:22.309521 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--22d6bcb2--409c--5bf5--80b4--f4dcfc8f2a09-osd--block--22d6bcb2--409c--5bf5--80b4--f4dcfc8f2a09'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-QndfkN-PmDn-892W-SloC-8ojV-i8Ey-uDNKwa', 'scsi-0QEMU_QEMU_HARDDISK_3c29cd91-58e9-42ce-8653-990321e9d76b', 'scsi-SQEMU_QEMU_HARDDISK_3c29cd91-58e9-42ce-8653-990321e9d76b'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:19:22.309526 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:19:22.309531 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--1256d0fb--e60f--50ff--afd8--4edc5f2c0a15-osd--block--1256d0fb--e60f--50ff--afd8--4edc5f2c0a15'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-VC67cJ-cfHh-yd2t-xcB2-EPLx-jHbU-PYUcy2', 'scsi-0QEMU_QEMU_HARDDISK_fd99b70f-8aa3-4e15-8e66-07a34fe10111', 'scsi-SQEMU_QEMU_HARDDISK_fd99b70f-8aa3-4e15-8e66-07a34fe10111'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:19:22.309535 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:19:22.309549 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c9a7d91-c8fc-48f8-acad-853231e255dd', 'scsi-SQEMU_QEMU_HARDDISK_0c9a7d91-c8fc-48f8-acad-853231e255dd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:19:22.309554 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:19:22.309558 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-06-19-22-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:19:22.309563 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:19:22.309567 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:19:22.309571 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:19:22.309586 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c4ee01e1-6308-4ea1-8b72-f8c7aa0af3e8', 'scsi-SQEMU_QEMU_HARDDISK_c4ee01e1-6308-4ea1-8b72-f8c7aa0af3e8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c4ee01e1-6308-4ea1-8b72-f8c7aa0af3e8-part1', 'scsi-SQEMU_QEMU_HARDDISK_c4ee01e1-6308-4ea1-8b72-f8c7aa0af3e8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c4ee01e1-6308-4ea1-8b72-f8c7aa0af3e8-part14', 'scsi-SQEMU_QEMU_HARDDISK_c4ee01e1-6308-4ea1-8b72-f8c7aa0af3e8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c4ee01e1-6308-4ea1-8b72-f8c7aa0af3e8-part15', 'scsi-SQEMU_QEMU_HARDDISK_c4ee01e1-6308-4ea1-8b72-f8c7aa0af3e8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c4ee01e1-6308-4ea1-8b72-f8c7aa0af3e8-part16', 'scsi-SQEMU_QEMU_HARDDISK_c4ee01e1-6308-4ea1-8b72-f8c7aa0af3e8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-07-06 20:19:22.309591 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--31ad454b--c5b7--54ad--acab--5839a456146b-osd--block--31ad454b--c5b7--54ad--acab--5839a456146b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-xKngM6-LQyz-Rj7F-7sve-UhFC-KKz3-x6W3RS', 'scsi-0QEMU_QEMU_HARDDISK_c523d18d-f688-4547-bb4c-d63e44be8719', 'scsi-SQEMU_QEMU_HARDDISK_c523d18d-f688-4547-bb4c-d63e44be8719'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:19:22.309596 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--2eb0e424--9f58--550c--b8cf--76c1b52e517a-osd--block--2eb0e424--9f58--550c--b8cf--76c1b52e517a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-aTjS2u-DqZb-KwhC-VTW1-S3tv-oUtt-8jR2oi', 'scsi-0QEMU_QEMU_HARDDISK_e42fce45-67a3-477c-881f-6db38785a929', 'scsi-SQEMU_QEMU_HARDDISK_e42fce45-67a3-477c-881f-6db38785a929'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:19:22.309604 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_28d32b1f-54bf-4890-9371-a2140c9d3e48', 'scsi-SQEMU_QEMU_HARDDISK_28d32b1f-54bf-4890-9371-a2140c9d3e48'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:19:22.309614 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-06-19-22-25-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:19:22.309619 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:19:22.309623 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fc1251bd--e592--50b3--b197--385f411a7339-osd--block--fc1251bd--e592--50b3--b197--385f411a7339', 'dm-uuid-LVM-O0gBJzBTc4KRPexI3RumJDTRHsjXEAJqrmIUWQrIiVWfdDBmbIDQHG2A4MuCinJ5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:19:22.309628 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b5f0fce0--432f--57fb--bebd--426658f60987-osd--block--b5f0fce0--432f--57fb--bebd--426658f60987', 'dm-uuid-LVM-177gejIMY5lQSIa8RjRlJ1ZfVu8100q5WAzDShduKuNhHFM4DFY36XReOHU4dGHQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:19:22.309632 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:19:22.309640 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:19:22.309644 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:19:22.309654 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:19:22.309659 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:19:22.309663 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:19:22.309667 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:19:22.309671 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:19:22.309684 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8627e713-83d6-40b5-b9ad-70826e27e3e5', 'scsi-SQEMU_QEMU_HARDDISK_8627e713-83d6-40b5-b9ad-70826e27e3e5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8627e713-83d6-40b5-b9ad-70826e27e3e5-part1', 'scsi-SQEMU_QEMU_HARDDISK_8627e713-83d6-40b5-b9ad-70826e27e3e5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8627e713-83d6-40b5-b9ad-70826e27e3e5-part14', 'scsi-SQEMU_QEMU_HARDDISK_8627e713-83d6-40b5-b9ad-70826e27e3e5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8627e713-83d6-40b5-b9ad-70826e27e3e5-part15', 'scsi-SQEMU_QEMU_HARDDISK_8627e713-83d6-40b5-b9ad-70826e27e3e5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8627e713-83d6-40b5-b9ad-70826e27e3e5-part16', 'scsi-SQEMU_QEMU_HARDDISK_8627e713-83d6-40b5-b9ad-70826e27e3e5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-07-06 20:19:22.309690 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--fc1251bd--e592--50b3--b197--385f411a7339-osd--block--fc1251bd--e592--50b3--b197--385f411a7339'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-RyHzxU-aBiw-OJMc-20Q4-Jk3v-wYcp-56OPxc', 'scsi-0QEMU_QEMU_HARDDISK_4a0eaf3f-1395-4073-9878-c6e703eff332', 'scsi-SQEMU_QEMU_HARDDISK_4a0eaf3f-1395-4073-9878-c6e703eff332'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:19:22.309694 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b5f0fce0--432f--57fb--bebd--426658f60987-osd--block--b5f0fce0--432f--57fb--bebd--426658f60987'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-eNwe9v-UwcW-mdfT-UY3c-3ejI-jUS1-pkX8o1', 'scsi-0QEMU_QEMU_HARDDISK_1751cfdb-b4ca-4b06-9fa0-b986eec2737a', 'scsi-SQEMU_QEMU_HARDDISK_1751cfdb-b4ca-4b06-9fa0-b986eec2737a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:19:22.309701 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_29aeef2c-15f7-4912-be6e-922934b043d5', 'scsi-SQEMU_QEMU_HARDDISK_29aeef2c-15f7-4912-be6e-922934b043d5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-06 20:19:22.309711 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-06-19-22-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-06 20:19:22.309715 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:19:22.309720 | orchestrator |
2025-07-06 20:19:22.309724 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-07-06 20:19:22.309728 | orchestrator | Sunday 06 July 2025 20:17:29 +0000 (0:00:00.537) 0:00:15.016 ***********
2025-07-06 20:19:22.309733 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:19:22.309737 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:19:22.309741 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:19:22.309745 | orchestrator |
2025-07-06 20:19:22.309749 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-07-06 20:19:22.309754 | orchestrator | Sunday 06 July 2025 20:17:30 +0000 (0:00:00.638) 0:00:15.655 ***********
2025-07-06 20:19:22.309758 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:19:22.309762 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:19:22.309767 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:19:22.309771 | orchestrator |
2025-07-06 20:19:22.309775 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-07-06 20:19:22.309780 | orchestrator | Sunday 06 July 2025 20:17:30 +0000 (0:00:00.356) 0:00:16.011 ***********
2025-07-06 20:19:22.309784 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:19:22.309788 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:19:22.309792 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:19:22.309796 | orchestrator |
2025-07-06 20:19:22.309801 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-07-06 20:19:22.309805 | orchestrator | Sunday 06 July 2025 20:17:31 +0000 (0:00:00.634) 0:00:16.646 ***********
2025-07-06 20:19:22.309809 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:19:22.309814 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:19:22.309818 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:19:22.309827 | orchestrator |
2025-07-06 20:19:22.309831 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-07-06 20:19:22.309835 | orchestrator | Sunday 06 July 2025 20:17:31 +0000 (0:00:00.257) 0:00:16.903 ***********
2025-07-06 20:19:22.309839 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:19:22.309842 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:19:22.309846 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:19:22.309850 | orchestrator |
2025-07-06 20:19:22.309854 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-07-06 20:19:22.309857 | orchestrator | Sunday 06 July 2025 20:17:32 +0000 (0:00:00.390) 0:00:17.293 ***********
2025-07-06 20:19:22.309861 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:19:22.309865 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:19:22.309868 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:19:22.309872 | orchestrator |
2025-07-06 20:19:22.309876 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-07-06 20:19:22.309879 | orchestrator | Sunday 06 July 2025 20:17:32 +0000 (0:00:00.426) 0:00:17.720 ***********
2025-07-06 20:19:22.309883 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-07-06 20:19:22.309887 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-07-06 20:19:22.309891 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-07-06 20:19:22.309895 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-07-06 20:19:22.309898 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-07-06 20:19:22.309902 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-07-06 20:19:22.309906 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-07-06 20:19:22.309910 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-07-06 20:19:22.309913 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-07-06 20:19:22.309917 | orchestrator |
2025-07-06 20:19:22.309921 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-07-06 20:19:22.309924 | orchestrator | Sunday 06 July 2025 20:17:33 +0000 (0:00:00.762) 0:00:18.483 ***********
2025-07-06 20:19:22.309928 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-07-06 20:19:22.309932 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-07-06 20:19:22.309935 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-07-06 20:19:22.309939 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:19:22.309943 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-07-06 20:19:22.309946 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-07-06 20:19:22.309950 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-07-06 20:19:22.309954 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:19:22.309957 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-07-06 20:19:22.309961 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-07-06 20:19:22.309965 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-07-06 20:19:22.309969 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:19:22.309972 | orchestrator |
2025-07-06 20:19:22.309976 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-07-06 20:19:22.309980 | orchestrator | Sunday 06 July 2025 20:17:33 +0000 (0:00:00.320) 0:00:18.803 ***********
2025-07-06 20:19:22.309984 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-06 20:19:22.309987 | orchestrator |
2025-07-06 20:19:22.309991 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-07-06 20:19:22.309997 | orchestrator | Sunday 06 July 2025 20:17:34 +0000 (0:00:00.573) 0:00:19.377 ***********
2025-07-06 20:19:22.310001 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:19:22.310005 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:19:22.310008 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:19:22.310042 | orchestrator |
2025-07-06 20:19:22.310050 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-07-06 20:19:22.310054 | orchestrator | Sunday 06 July 2025 20:17:34 +0000 (0:00:00.328) 0:00:19.706 ***********
2025-07-06 20:19:22.310057 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:19:22.310061 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:19:22.310065 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:19:22.310069 | orchestrator |
2025-07-06 20:19:22.310072 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-07-06 20:19:22.310076 | orchestrator | Sunday 06 July 2025 20:17:34 +0000 (0:00:00.333) 0:00:20.004 ***********
2025-07-06 20:19:22.310080 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:19:22.310084 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:19:22.310087 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:19:22.310091 | orchestrator |
2025-07-06 20:19:22.310095 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-07-06 20:19:22.310098 | orchestrator | Sunday 06 July 2025 20:17:35 +0000 (0:00:00.333) 0:00:20.338 ***********
2025-07-06 20:19:22.310102 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:19:22.310106 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:19:22.310110 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:19:22.310113 | orchestrator |
2025-07-06 20:19:22.310117 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-07-06 20:19:22.310121 | orchestrator | Sunday 06 July 2025 20:17:35 +0000 (0:00:00.569) 0:00:20.908 ***********
2025-07-06 20:19:22.310124 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-06 20:19:22.310143 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-06 20:19:22.310150 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-06 20:19:22.310156 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:19:22.310164 | orchestrator |
2025-07-06 20:19:22.310170 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-07-06 20:19:22.310175 | orchestrator | Sunday 06 July 2025 20:17:36 +0000 (0:00:00.360) 0:00:21.268 ***********
2025-07-06 20:19:22.310181 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-06 20:19:22.310188 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-06 20:19:22.310192 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-06 20:19:22.310196 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:19:22.310199 | orchestrator |
2025-07-06 20:19:22.310203 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-07-06 20:19:22.310207 | orchestrator | Sunday 06 July 2025 20:17:36 +0000 (0:00:00.379) 0:00:21.647 ***********
2025-07-06 20:19:22.310210 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-06 20:19:22.310214 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-06 20:19:22.310218 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-06 20:19:22.310221 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:19:22.310225 | orchestrator |
2025-07-06 20:19:22.310229 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-07-06 20:19:22.310232 | orchestrator | Sunday 06 July 2025 20:17:36 +0000 (0:00:00.357) 0:00:22.005 ***********
2025-07-06 20:19:22.310236 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:19:22.310240 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:19:22.310243 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:19:22.310247 | orchestrator |
2025-07-06 20:19:22.310251 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-07-06 20:19:22.310254 | orchestrator | Sunday 06 July 2025 20:17:37 +0000 (0:00:00.397) 0:00:22.402 ***********
2025-07-06 20:19:22.310258 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-07-06 20:19:22.310262 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-07-06 20:19:22.310266 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-07-06 20:19:22.310269 | orchestrator |
2025-07-06 20:19:22.310278 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-07-06 20:19:22.310282 | orchestrator | Sunday 06 July 2025 20:17:37 +0000 (0:00:00.594) 0:00:22.996 ***********
2025-07-06 20:19:22.310286 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-07-06 20:19:22.310290 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-07-06 20:19:22.310293 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-07-06 20:19:22.310297 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-07-06 20:19:22.310301 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-07-06 20:19:22.310305 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-07-06 20:19:22.310308 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-07-06 20:19:22.310312 | orchestrator |
2025-07-06 20:19:22.310316 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-07-06 20:19:22.310319 | orchestrator | Sunday 06 July 2025 20:17:38 +0000 (0:00:01.005) 0:00:24.002 ***********
2025-07-06 20:19:22.310323 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-07-06 20:19:22.310327 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-07-06 20:19:22.310330 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-07-06 20:19:22.310334 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-07-06 20:19:22.310338 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-07-06 20:19:22.310345 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-07-06 20:19:22.310348 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-07-06 20:19:22.310352 | orchestrator |
2025-07-06 20:19:22.310358 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2025-07-06 20:19:22.310362 | orchestrator | Sunday 06 July 2025 20:17:40 +0000 (0:00:01.993) 0:00:25.996 ***********
2025-07-06 20:19:22.310366 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:19:22.310369 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:19:22.310373 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2025-07-06 20:19:22.310377 | orchestrator |
2025-07-06 20:19:22.310381 |
orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-07-06 20:19:22.310384 | orchestrator | Sunday 06 July 2025 20:17:41 +0000 (0:00:00.369) 0:00:26.365 *********** 2025-07-06 20:19:22.310388 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-07-06 20:19:22.310393 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-07-06 20:19:22.310397 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-07-06 20:19:22.310401 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-07-06 20:19:22.310408 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-07-06 20:19:22.310412 | orchestrator | 2025-07-06 20:19:22.310416 | orchestrator | TASK [generate keys] 
*********************************************************** 2025-07-06 20:19:22.310420 | orchestrator | Sunday 06 July 2025 20:18:26 +0000 (0:00:44.736) 0:01:11.101 *********** 2025-07-06 20:19:22.310424 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:19:22.310427 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:19:22.310431 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:19:22.310435 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:19:22.310439 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:19:22.310442 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:19:22.310446 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-07-06 20:19:22.310450 | orchestrator | 2025-07-06 20:19:22.310453 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-07-06 20:19:22.310457 | orchestrator | Sunday 06 July 2025 20:18:50 +0000 (0:00:24.153) 0:01:35.255 *********** 2025-07-06 20:19:22.310461 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:19:22.310465 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:19:22.310468 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:19:22.310472 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:19:22.310476 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:19:22.310480 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:19:22.310483 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-07-06 20:19:22.310487 | orchestrator | 2025-07-06 20:19:22.310491 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-07-06 20:19:22.310495 | orchestrator | Sunday 06 July 2025 20:19:02 +0000 (0:00:12.232) 0:01:47.487 *********** 2025-07-06 20:19:22.310498 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:19:22.310502 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-07-06 20:19:22.310506 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-07-06 20:19:22.310509 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:19:22.310516 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-07-06 20:19:22.310520 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-07-06 20:19:22.310525 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:19:22.310529 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-07-06 20:19:22.310533 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-07-06 20:19:22.310537 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:19:22.310540 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-07-06 20:19:22.310544 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-07-06 20:19:22.310548 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:19:22.310555 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2025-07-06 20:19:22.310559 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-07-06 20:19:22.310563 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-06 20:19:22.310567 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-07-06 20:19:22.310570 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-07-06 20:19:22.310574 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-07-06 20:19:22.310578 | orchestrator | 2025-07-06 20:19:22.310582 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:19:22.310586 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2025-07-06 20:19:22.310591 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-07-06 20:19:22.310595 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-07-06 20:19:22.310599 | orchestrator | 2025-07-06 20:19:22.310602 | orchestrator | 2025-07-06 20:19:22.310606 | orchestrator | 2025-07-06 20:19:22.310610 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:19:22.310614 | orchestrator | Sunday 06 July 2025 20:19:20 +0000 (0:00:17.808) 0:02:05.296 *********** 2025-07-06 20:19:22.310617 | orchestrator | =============================================================================== 2025-07-06 20:19:22.310621 | orchestrator | create openstack pool(s) ----------------------------------------------- 44.74s 2025-07-06 20:19:22.310625 | orchestrator | generate keys ---------------------------------------------------------- 24.15s 2025-07-06 20:19:22.310629 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.81s 
2025-07-06 20:19:22.310632 | orchestrator | get keys from monitors ------------------------------------------------- 12.23s 2025-07-06 20:19:22.310636 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.99s 2025-07-06 20:19:22.310640 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 1.96s 2025-07-06 20:19:22.310644 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.65s 2025-07-06 20:19:22.310647 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.01s 2025-07-06 20:19:22.310651 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.76s 2025-07-06 20:19:22.310655 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.68s 2025-07-06 20:19:22.310658 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.65s 2025-07-06 20:19:22.310662 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.65s 2025-07-06 20:19:22.310666 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.64s 2025-07-06 20:19:22.310670 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.63s 2025-07-06 20:19:22.310673 | orchestrator | ceph-facts : Set_fact rgw_instances ------------------------------------- 0.59s 2025-07-06 20:19:22.310677 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.57s 2025-07-06 20:19:22.310681 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.57s 2025-07-06 20:19:22.310685 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.57s 2025-07-06 20:19:22.310688 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.54s 2025-07-06 
20:19:22.310692 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.53s 2025-07-06 20:19:25.339550 | orchestrator | 2025-07-06 20:19:25 | INFO  | Task c88be4ea-6643-4e26-94d2-003b1142d99b is in state STARTED 2025-07-06 20:19:25.340645 | orchestrator | 2025-07-06 20:19:25 | INFO  | Task af09afdf-08ee-41da-8935-aab3c666249c is in state STARTED 2025-07-06 20:19:25.341830 | orchestrator | 2025-07-06 20:19:25 | INFO  | Task 62424c6f-edf0-4359-9131-41391021e999 is in state STARTED 2025-07-06 20:19:25.341843 | orchestrator | 2025-07-06 20:19:25 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:19:28.387911 | orchestrator | 2025-07-06 20:19:28 | INFO  | Task c88be4ea-6643-4e26-94d2-003b1142d99b is in state STARTED 2025-07-06 20:19:28.389631 | orchestrator | 2025-07-06 20:19:28 | INFO  | Task af09afdf-08ee-41da-8935-aab3c666249c is in state STARTED 2025-07-06 20:19:28.391587 | orchestrator | 2025-07-06 20:19:28 | INFO  | Task 62424c6f-edf0-4359-9131-41391021e999 is in state STARTED 2025-07-06 20:19:28.391661 | orchestrator | 2025-07-06 20:19:28 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:19:31.450687 | orchestrator | 2025-07-06 20:19:31 | INFO  | Task c88be4ea-6643-4e26-94d2-003b1142d99b is in state STARTED 2025-07-06 20:19:31.452641 | orchestrator | 2025-07-06 20:19:31 | INFO  | Task af09afdf-08ee-41da-8935-aab3c666249c is in state STARTED 2025-07-06 20:19:31.454624 | orchestrator | 2025-07-06 20:19:31 | INFO  | Task 62424c6f-edf0-4359-9131-41391021e999 is in state STARTED 2025-07-06 20:19:31.454693 | orchestrator | 2025-07-06 20:19:31 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:19:34.511529 | orchestrator | 2025-07-06 20:19:34 | INFO  | Task c88be4ea-6643-4e26-94d2-003b1142d99b is in state STARTED 2025-07-06 20:19:34.513697 | orchestrator | 2025-07-06 20:19:34 | INFO  | Task af09afdf-08ee-41da-8935-aab3c666249c is in state STARTED 2025-07-06 20:19:34.514957 | orchestrator | 2025-07-06 
20:19:34 | INFO  | Task 62424c6f-edf0-4359-9131-41391021e999 is in state STARTED 2025-07-06 20:19:34.515278 | orchestrator | 2025-07-06 20:19:34 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:19:37.563221 | orchestrator | 2025-07-06 20:19:37 | INFO  | Task c88be4ea-6643-4e26-94d2-003b1142d99b is in state STARTED 2025-07-06 20:19:37.563310 | orchestrator | 2025-07-06 20:19:37 | INFO  | Task af09afdf-08ee-41da-8935-aab3c666249c is in state STARTED 2025-07-06 20:19:37.564075 | orchestrator | 2025-07-06 20:19:37 | INFO  | Task 62424c6f-edf0-4359-9131-41391021e999 is in state STARTED 2025-07-06 20:19:37.564093 | orchestrator | 2025-07-06 20:19:37 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:19:40.614576 | orchestrator | 2025-07-06 20:19:40 | INFO  | Task c88be4ea-6643-4e26-94d2-003b1142d99b is in state STARTED 2025-07-06 20:19:40.614673 | orchestrator | 2025-07-06 20:19:40 | INFO  | Task af09afdf-08ee-41da-8935-aab3c666249c is in state STARTED 2025-07-06 20:19:40.614977 | orchestrator | 2025-07-06 20:19:40 | INFO  | Task 62424c6f-edf0-4359-9131-41391021e999 is in state STARTED 2025-07-06 20:19:40.615001 | orchestrator | 2025-07-06 20:19:40 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:19:43.663212 | orchestrator | 2025-07-06 20:19:43 | INFO  | Task c88be4ea-6643-4e26-94d2-003b1142d99b is in state STARTED 2025-07-06 20:19:43.664193 | orchestrator | 2025-07-06 20:19:43 | INFO  | Task af09afdf-08ee-41da-8935-aab3c666249c is in state STARTED 2025-07-06 20:19:43.666315 | orchestrator | 2025-07-06 20:19:43 | INFO  | Task 62424c6f-edf0-4359-9131-41391021e999 is in state STARTED 2025-07-06 20:19:43.666400 | orchestrator | 2025-07-06 20:19:43 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:19:46.710096 | orchestrator | 2025-07-06 20:19:46 | INFO  | Task c88be4ea-6643-4e26-94d2-003b1142d99b is in state STARTED 2025-07-06 20:19:46.712370 | orchestrator | 2025-07-06 20:19:46 | INFO  | Task 
af09afdf-08ee-41da-8935-aab3c666249c is in state STARTED 2025-07-06 20:19:46.713787 | orchestrator | 2025-07-06 20:19:46 | INFO  | Task 62424c6f-edf0-4359-9131-41391021e999 is in state STARTED 2025-07-06 20:19:46.713818 | orchestrator | 2025-07-06 20:19:46 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:19:49.750718 | orchestrator | 2025-07-06 20:19:49 | INFO  | Task c88be4ea-6643-4e26-94d2-003b1142d99b is in state STARTED 2025-07-06 20:19:49.752208 | orchestrator | 2025-07-06 20:19:49 | INFO  | Task af09afdf-08ee-41da-8935-aab3c666249c is in state STARTED 2025-07-06 20:19:49.753318 | orchestrator | 2025-07-06 20:19:49 | INFO  | Task 62424c6f-edf0-4359-9131-41391021e999 is in state SUCCESS 2025-07-06 20:19:49.754705 | orchestrator | 2025-07-06 20:19:49 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:19:52.795825 | orchestrator | 2025-07-06 20:19:52 | INFO  | Task c88be4ea-6643-4e26-94d2-003b1142d99b is in state STARTED 2025-07-06 20:19:52.797196 | orchestrator | 2025-07-06 20:19:52 | INFO  | Task af09afdf-08ee-41da-8935-aab3c666249c is in state STARTED 2025-07-06 20:19:52.798712 | orchestrator | 2025-07-06 20:19:52 | INFO  | Task 72a9ec03-79a0-4902-9ea3-055208d1bb37 is in state STARTED 2025-07-06 20:19:52.798962 | orchestrator | 2025-07-06 20:19:52 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:19:55.838086 | orchestrator | 2025-07-06 20:19:55 | INFO  | Task c88be4ea-6643-4e26-94d2-003b1142d99b is in state STARTED 2025-07-06 20:19:55.838219 | orchestrator | 2025-07-06 20:19:55 | INFO  | Task af09afdf-08ee-41da-8935-aab3c666249c is in state SUCCESS 2025-07-06 20:19:55.839731 | orchestrator | 2025-07-06 20:19:55.839953 | orchestrator | 2025-07-06 20:19:55.839969 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-07-06 20:19:55.839982 | orchestrator | 2025-07-06 20:19:55.839994 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 
2025-07-06 20:19:55.840005 | orchestrator | Sunday 06 July 2025 20:19:24 +0000 (0:00:00.158) 0:00:00.158 *********** 2025-07-06 20:19:55.840016 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-07-06 20:19:55.840030 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-07-06 20:19:55.840042 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-07-06 20:19:55.840053 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-07-06 20:19:55.840064 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-07-06 20:19:55.840075 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-07-06 20:19:55.840086 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-07-06 20:19:55.840097 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-07-06 20:19:55.840108 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-07-06 20:19:55.840158 | orchestrator | 2025-07-06 20:19:55.840170 | orchestrator | TASK [Create share directory] ************************************************** 2025-07-06 20:19:55.840181 | orchestrator | Sunday 06 July 2025 20:19:29 +0000 (0:00:04.406) 0:00:04.565 *********** 2025-07-06 20:19:55.840289 | orchestrator | changed: [testbed-manager -> localhost] 2025-07-06 20:19:55.840305 | orchestrator | 2025-07-06 20:19:55.840317 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-07-06 20:19:55.840328 | orchestrator | Sunday 06 July 2025 20:19:30 +0000 (0:00:00.954) 0:00:05.519 
*********** 2025-07-06 20:19:55.840339 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-07-06 20:19:55.840376 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-07-06 20:19:55.840411 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-07-06 20:19:55.840422 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-07-06 20:19:55.840433 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-07-06 20:19:55.840444 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-07-06 20:19:55.840454 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-07-06 20:19:55.840465 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-07-06 20:19:55.840476 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-07-06 20:19:55.840487 | orchestrator | 2025-07-06 20:19:55.840498 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2025-07-06 20:19:55.840508 | orchestrator | Sunday 06 July 2025 20:19:43 +0000 (0:00:12.952) 0:00:18.472 *********** 2025-07-06 20:19:55.840520 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2025-07-06 20:19:55.840531 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-07-06 20:19:55.840542 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-07-06 20:19:55.840552 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2025-07-06 20:19:55.840563 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-07-06 20:19:55.840573 | orchestrator | changed: [testbed-manager] 
=> (item=ceph.client.nova.keyring) 2025-07-06 20:19:55.840584 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2025-07-06 20:19:55.840595 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2025-07-06 20:19:55.840606 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2025-07-06 20:19:55.840617 | orchestrator | 2025-07-06 20:19:55.840627 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:19:55.840638 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:19:55.840651 | orchestrator | 2025-07-06 20:19:55.840662 | orchestrator | 2025-07-06 20:19:55.840672 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:19:55.840683 | orchestrator | Sunday 06 July 2025 20:19:49 +0000 (0:00:05.983) 0:00:24.455 *********** 2025-07-06 20:19:55.840694 | orchestrator | =============================================================================== 2025-07-06 20:19:55.840705 | orchestrator | Write ceph keys to the share directory --------------------------------- 12.95s 2025-07-06 20:19:55.840715 | orchestrator | Write ceph keys to the configuration directory -------------------------- 5.98s 2025-07-06 20:19:55.840736 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.41s 2025-07-06 20:19:55.840748 | orchestrator | Create share directory -------------------------------------------------- 0.95s 2025-07-06 20:19:55.840759 | orchestrator | 2025-07-06 20:19:55.840769 | orchestrator | 2025-07-06 20:19:55.840780 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-06 20:19:55.840791 | orchestrator | 2025-07-06 20:19:55.840814 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 
2025-07-06 20:19:55.840825 | orchestrator | Sunday 06 July 2025 20:18:10 +0000 (0:00:00.231) 0:00:00.231 *********** 2025-07-06 20:19:55.840836 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:19:55.840847 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:19:55.840858 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:19:55.840869 | orchestrator | 2025-07-06 20:19:55.840880 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-06 20:19:55.840890 | orchestrator | Sunday 06 July 2025 20:18:10 +0000 (0:00:00.254) 0:00:00.486 *********** 2025-07-06 20:19:55.840911 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-07-06 20:19:55.840925 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-07-06 20:19:55.840961 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-07-06 20:19:55.840974 | orchestrator | 2025-07-06 20:19:55.840987 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-07-06 20:19:55.841000 | orchestrator | 2025-07-06 20:19:55.841013 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-07-06 20:19:55.841026 | orchestrator | Sunday 06 July 2025 20:18:11 +0000 (0:00:00.334) 0:00:00.821 *********** 2025-07-06 20:19:55.841037 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:19:55.841048 | orchestrator | 2025-07-06 20:19:55.841058 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-07-06 20:19:55.841070 | orchestrator | Sunday 06 July 2025 20:18:11 +0000 (0:00:00.433) 0:00:01.254 *********** 2025-07-06 20:19:55.841095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 
'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-06 20:19:55.841201 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-07-06 20:19:55.841243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-07-06 20:19:55.841265 | orchestrator |
2025-07-06 20:19:55.841284 | orchestrator | TASK [horizon : Set empty custom policy] ***************************************
2025-07-06 20:19:55.841302 | orchestrator | Sunday 06 July 2025 20:18:12 +0000 (0:00:01.040) 0:00:02.295 ***********
2025-07-06 20:19:55.841320 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:19:55.841361 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:19:55.841396 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:19:55.841414 | orchestrator |
2025-07-06 20:19:55.841425 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-07-06 20:19:55.841455 | orchestrator | Sunday 06 July 2025 20:18:13 +0000 (0:00:00.364) 0:00:02.660 ***********
2025-07-06 20:19:55.841467 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2025-07-06 20:19:55.841478 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2025-07-06 20:19:55.841497 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2025-07-06 20:19:55.841509 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2025-07-06 20:19:55.841519 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2025-07-06 20:19:55.841530 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2025-07-06 20:19:55.841555 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2025-07-06 20:19:55.841566 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2025-07-06 20:19:55.841577 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2025-07-06 20:19:55.841587 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2025-07-06 20:19:55.841598 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2025-07-06 20:19:55.841608 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2025-07-06 20:19:55.841619 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2025-07-06 20:19:55.841630 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2025-07-06 20:19:55.841640 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2025-07-06 20:19:55.841651 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2025-07-06 20:19:55.841661 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2025-07-06 20:19:55.841672 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2025-07-06 20:19:55.841683 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2025-07-06 20:19:55.841693 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2025-07-06 20:19:55.841704 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2025-07-06 20:19:55.841714 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2025-07-06 20:19:55.841725 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2025-07-06 20:19:55.841735 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2025-07-06 20:19:55.841747 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2025-07-06 20:19:55.841761 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2025-07-06 20:19:55.841772 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2025-07-06 20:19:55.841782 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2025-07-06 20:19:55.841793 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2025-07-06 20:19:55.841804 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2025-07-06 20:19:55.841814 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2025-07-06 20:19:55.841834 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2025-07-06 20:19:55.841845 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2025-07-06 20:19:55.841855 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2025-07-06 20:19:55.841866 | orchestrator |
2025-07-06 20:19:55.841877 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-07-06 20:19:55.841888 | orchestrator | Sunday 06 July 2025 20:18:13 +0000 (0:00:00.680) 0:00:03.341 ***********
2025-07-06 20:19:55.841898 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:19:55.841909 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:19:55.841920 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:19:55.841941 | orchestrator |
2025-07-06 20:19:55.841953 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-07-06 20:19:55.841968 | orchestrator | Sunday 06 July 2025 20:18:14 +0000 (0:00:00.267) 0:00:03.609 ***********
2025-07-06 20:19:55.841979 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:19:55.841990 | orchestrator |
2025-07-06 20:19:55.842013 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-07-06 20:19:55.842088 | orchestrator | Sunday 06 July 2025 20:18:14 +0000 (0:00:00.132) 0:00:03.741 ***********
2025-07-06 20:19:55.842100 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:19:55.842111 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:19:55.842146 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:19:55.842157 | orchestrator |
2025-07-06 20:19:55.842168 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-07-06 20:19:55.842179 | orchestrator | Sunday 06 July 2025 20:18:14 +0000 (0:00:00.362) 0:00:04.103 ***********
2025-07-06 20:19:55.842190 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:19:55.842201 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:19:55.842212 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:19:55.842222 | orchestrator |
2025-07-06 20:19:55.842233 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-07-06 20:19:55.842244 | orchestrator | Sunday 06 July 2025 20:18:14 +0000 (0:00:00.277) 0:00:04.381 ***********
2025-07-06 20:19:55.842255 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:19:55.842265 | orchestrator |
2025-07-06 20:19:55.842276 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-07-06 20:19:55.842287 | orchestrator | Sunday 06 July 2025 20:18:14 +0000 (0:00:00.125) 0:00:04.506 ***********
2025-07-06 20:19:55.842298 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:19:55.842309 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:19:55.842319 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:19:55.842330 | orchestrator |
2025-07-06 20:19:55.842341 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-07-06 20:19:55.842366 | orchestrator | Sunday 06 July 2025 20:18:15 +0000 (0:00:00.261) 0:00:04.768 ***********
2025-07-06 20:19:55.842377 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:19:55.842399 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:19:55.842410 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:19:55.842421 | orchestrator |
2025-07-06 20:19:55.842432 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-07-06 20:19:55.842443 | orchestrator | Sunday 06 July 2025 20:18:15 +0000 (0:00:00.264) 0:00:05.032 ***********
2025-07-06 20:19:55.842454 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:19:55.842464 | orchestrator |
2025-07-06 20:19:55.842475 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-07-06 20:19:55.842486 | orchestrator | Sunday 06 July 2025 20:18:15 +0000 (0:00:00.240) 0:00:05.272 ***********
2025-07-06 20:19:55.842507 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:19:55.842518 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:19:55.842529 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:19:55.842539 | orchestrator |
2025-07-06 20:19:55.842550 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-07-06 20:19:55.842561 | orchestrator | Sunday 06 July 2025 20:18:16 +0000 (0:00:00.286) 0:00:05.558 ***********
2025-07-06 20:19:55.842572 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:19:55.842583 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:19:55.842594 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:19:55.842605 | orchestrator |
2025-07-06 20:19:55.842616 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-07-06 20:19:55.842627 | orchestrator | Sunday 06 July 2025 20:18:16 +0000 (0:00:00.286) 0:00:05.845 ***********
2025-07-06 20:19:55.842637 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:19:55.842648 | orchestrator |
2025-07-06 20:19:55.842659 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-07-06 20:19:55.842670 | orchestrator | Sunday 06 July 2025 20:18:16 +0000 (0:00:00.113) 0:00:05.959 ***********
2025-07-06 20:19:55.842680 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:19:55.842691 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:19:55.842702 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:19:55.842713 | orchestrator |
2025-07-06 20:19:55.842723 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-07-06 20:19:55.842750 | orchestrator | Sunday 06 July 2025 20:18:16 +0000 (0:00:00.268) 0:00:06.228 ***********
2025-07-06 20:19:55.842761 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:19:55.842772 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:19:55.842783 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:19:55.842793 | orchestrator |
2025-07-06 20:19:55.842804 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-07-06 20:19:55.842815 | orchestrator | Sunday 06 July 2025 20:18:17 +0000 (0:00:00.424) 0:00:06.652 ***********
2025-07-06 20:19:55.842826 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:19:55.842837 | orchestrator |
2025-07-06 20:19:55.842848 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-07-06 20:19:55.842858 | orchestrator | Sunday 06 July 2025 20:18:17 +0000 (0:00:00.113) 0:00:06.765 ***********
2025-07-06 20:19:55.842869 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:19:55.842880 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:19:55.842891 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:19:55.842901 | orchestrator |
2025-07-06 20:19:55.842912 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-07-06 20:19:55.842923 | orchestrator | Sunday 06 July 2025 20:18:17 +0000 (0:00:00.266) 0:00:07.032 ***********
2025-07-06 20:19:55.842934 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:19:55.842945 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:19:55.842955 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:19:55.842966 | orchestrator |
2025-07-06 20:19:55.842977 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-07-06 20:19:55.843000 | orchestrator | Sunday 06 July 2025 20:18:17 +0000 (0:00:00.270) 0:00:07.302 ***********
2025-07-06 20:19:55.843010 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:19:55.843021 | orchestrator |
2025-07-06 20:19:55.843043 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-07-06 20:19:55.843055 | orchestrator | Sunday 06 July 2025 20:18:17 +0000 (0:00:00.121) 0:00:07.423 ***********
2025-07-06 20:19:55.843065 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:19:55.843076 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:19:55.843087 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:19:55.843098 | orchestrator |
2025-07-06 20:19:55.843114 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-07-06 20:19:55.843173 | orchestrator | Sunday 06 July 2025 20:18:18 +0000 (0:00:00.367) 0:00:07.790 ***********
2025-07-06 20:19:55.843192 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:19:55.843203 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:19:55.843214 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:19:55.843225 | orchestrator |
2025-07-06 20:19:55.843242 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-07-06 20:19:55.843253 | orchestrator | Sunday 06 July 2025 20:18:18 +0000 (0:00:00.275) 0:00:08.066 ***********
2025-07-06 20:19:55.843264 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:19:55.843275 | orchestrator |
2025-07-06 20:19:55.843285 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-07-06 20:19:55.843296 | orchestrator | Sunday 06 July 2025 20:18:18 +0000 (0:00:00.116) 0:00:08.182 ***********
2025-07-06 20:19:55.843307 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:19:55.843318 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:19:55.843328 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:19:55.843339 | orchestrator |
2025-07-06 20:19:55.843350 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-07-06 20:19:55.843360 | orchestrator | Sunday 06 July 2025 20:18:18 +0000 (0:00:00.255) 0:00:08.438 ***********
2025-07-06 20:19:55.843371 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:19:55.843382 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:19:55.843393 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:19:55.843403 | orchestrator |
2025-07-06 20:19:55.843414 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-07-06 20:19:55.843424 | orchestrator | Sunday 06 July 2025 20:18:19 +0000 (0:00:00.273) 0:00:08.712 ***********
2025-07-06 20:19:55.843435 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:19:55.843446 | orchestrator |
2025-07-06 20:19:55.843457 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-07-06 20:19:55.843467 | orchestrator | Sunday 06 July 2025 20:18:19 +0000 (0:00:00.102) 0:00:08.814 ***********
2025-07-06 20:19:55.843478 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:19:55.843489 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:19:55.843499 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:19:55.843510 | orchestrator |
2025-07-06 20:19:55.843520 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-07-06 20:19:55.843531 | orchestrator | Sunday 06 July 2025 20:18:19 +0000 (0:00:00.363) 0:00:09.178 ***********
2025-07-06 20:19:55.843542 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:19:55.843552 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:19:55.843563 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:19:55.843574 | orchestrator |
2025-07-06 20:19:55.843584 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-07-06 20:19:55.843595 | orchestrator | Sunday 06 July 2025 20:18:19 +0000 (0:00:00.282) 0:00:09.461 ***********
2025-07-06 20:19:55.843606 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:19:55.843616 | orchestrator |
2025-07-06 20:19:55.843627 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-07-06 20:19:55.843638 | orchestrator | Sunday 06 July 2025 20:18:20 +0000 (0:00:00.116) 0:00:09.577 ***********
2025-07-06 20:19:55.843648 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:19:55.843657 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:19:55.843667 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:19:55.843676 | orchestrator |
2025-07-06 20:19:55.843686 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-07-06 20:19:55.843695 | orchestrator | Sunday 06 July 2025 20:18:20 +0000 (0:00:00.269) 0:00:09.846 ***********
2025-07-06 20:19:55.843705 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:19:55.843714 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:19:55.843724 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:19:55.843733 | orchestrator |
2025-07-06 20:19:55.843743 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-07-06 20:19:55.843752 | orchestrator | Sunday 06 July 2025 20:18:20 +0000 (0:00:00.436) 0:00:10.283 ***********
2025-07-06 20:19:55.843761 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:19:55.843777 | orchestrator |
2025-07-06 20:19:55.843787 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-07-06 20:19:55.843797 | orchestrator | Sunday 06 July 2025 20:18:20 +0000 (0:00:00.123) 0:00:10.407 ***********
2025-07-06 20:19:55.843806 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:19:55.843816 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:19:55.843825 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:19:55.843835 | orchestrator |
2025-07-06 20:19:55.843844 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2025-07-06 20:19:55.843854 | orchestrator | Sunday 06 July 2025 20:18:21 +0000 (0:00:00.273) 0:00:10.680 ***********
2025-07-06 20:19:55.843863 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:19:55.843872 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:19:55.843882 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:19:55.843891 | orchestrator |
2025-07-06 20:19:55.843901 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2025-07-06 20:19:55.843910 | orchestrator | Sunday 06 July 2025 20:18:22 +0000 (0:00:01.535) 0:00:12.216 ***********
2025-07-06 20:19:55.843920 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-07-06 20:19:55.843929 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-07-06 20:19:55.843939 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-07-06 20:19:55.843948 | orchestrator |
2025-07-06 20:19:55.843958 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2025-07-06 20:19:55.843967 | orchestrator | Sunday 06 July 2025 20:18:24 +0000 (0:00:01.638) 0:00:13.854 ***********
2025-07-06 20:19:55.843977 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-07-06 20:19:55.843986 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-07-06 20:19:55.844006 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-07-06 20:19:55.844016 | orchestrator |
2025-07-06 20:19:55.844025 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2025-07-06 20:19:55.844035 | orchestrator | Sunday 06 July 2025 20:18:26 +0000 (0:00:02.076) 0:00:15.930 ***********
2025-07-06 20:19:55.844049 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-07-06 20:19:55.844059 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-07-06 20:19:55.844069 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-07-06 20:19:55.844078 | orchestrator |
2025-07-06 20:19:55.844088 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2025-07-06 20:19:55.844097 | orchestrator | Sunday 06 July 2025 20:18:28 +0000 (0:00:01.736) 0:00:17.667 ***********
2025-07-06 20:19:55.844107 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:19:55.844116 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:19:55.844168 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:19:55.844178 | orchestrator |
2025-07-06 20:19:55.844187 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2025-07-06 20:19:55.844197 | orchestrator | Sunday 06 July 2025 20:18:28 +0000 (0:00:00.258) 0:00:17.925 ***********
2025-07-06 20:19:55.844206 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:19:55.844216 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:19:55.844225 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:19:55.844235 | orchestrator |
2025-07-06 20:19:55.844244 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-07-06 20:19:55.844254 | orchestrator | Sunday 06 July 2025 20:18:28 +0000 (0:00:00.240) 0:00:18.166 ***********
2025-07-06 20:19:55.844263 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-06 20:19:55.844280 | orchestrator |
2025-07-06 20:19:55.844289 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ********
2025-07-06 20:19:55.844299 | orchestrator | Sunday 06 July 2025 20:18:29 +0000 (0:00:00.697) 0:00:18.863 ***********
2025-07-06 20:19:55.844310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-07-06 20:19:55.844335 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-07-06 20:19:55.844350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-07-06 20:19:55.844359 | orchestrator |
2025-07-06 20:19:55.844367 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] ***
2025-07-06 20:19:55.844375 | orchestrator | Sunday 06 July 2025 20:18:30 +0000 (0:00:01.503) 0:00:20.367 ***********
2025-07-06 20:19:55.844394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-07-06 20:19:55.844408 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:19:55.844421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-07-06 20:19:55.844435 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:19:55.844443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-07-06 20:19:55.844457 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:19:55.844464 | orchestrator |
2025-07-06 20:19:55.844472 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] *****
2025-07-06 20:19:55.844480 | orchestrator | Sunday 06 July 2025 20:18:31 +0000 (0:00:00.583) 0:00:20.951 ***********
2025-07-06 20:19:55.844499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no',
'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-06 20:19:55.844508 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:19:55.844524 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-06 20:19:55.844532 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:19:55.844551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-06 20:19:55.844566 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:19:55.844573 | orchestrator | 2025-07-06 20:19:55.844581 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-07-06 20:19:55.844589 | orchestrator | Sunday 06 July 2025 20:18:32 +0000 (0:00:00.871) 0:00:21.822 *********** 2025-07-06 20:19:55.844597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-06 20:19:55.844617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 
'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-06 20:19:55.844633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-06 20:19:55.844641 | orchestrator | 2025-07-06 20:19:55.844649 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-07-06 20:19:55.844657 | orchestrator | Sunday 06 July 2025 20:18:33 +0000 (0:00:01.123) 0:00:22.946 *********** 2025-07-06 20:19:55.844665 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:19:55.844672 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:19:55.844680 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:19:55.844688 | orchestrator | 2025-07-06 20:19:55.844696 | 
orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-07-06 20:19:55.844704 | orchestrator | Sunday 06 July 2025 20:18:33 +0000 (0:00:00.264) 0:00:23.210 *********** 2025-07-06 20:19:55.844712 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:19:55.844720 | orchestrator | 2025-07-06 20:19:55.844728 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-07-06 20:19:55.844741 | orchestrator | Sunday 06 July 2025 20:18:34 +0000 (0:00:00.719) 0:00:23.929 *********** 2025-07-06 20:19:55.844749 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:19:55.844756 | orchestrator | 2025-07-06 20:19:55.844768 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-07-06 20:19:55.844776 | orchestrator | Sunday 06 July 2025 20:18:36 +0000 (0:00:02.167) 0:00:26.097 *********** 2025-07-06 20:19:55.844784 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:19:55.844792 | orchestrator | 2025-07-06 20:19:55.844800 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-07-06 20:19:55.844808 | orchestrator | Sunday 06 July 2025 20:18:38 +0000 (0:00:02.146) 0:00:28.243 *********** 2025-07-06 20:19:55.844815 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:19:55.844823 | orchestrator | 2025-07-06 20:19:55.844831 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-07-06 20:19:55.844839 | orchestrator | Sunday 06 July 2025 20:18:53 +0000 (0:00:15.307) 0:00:43.551 *********** 2025-07-06 20:19:55.844847 | orchestrator | 2025-07-06 20:19:55.844855 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-07-06 20:19:55.844862 | orchestrator | Sunday 06 July 2025 20:18:54 +0000 (0:00:00.066) 0:00:43.617 
*********** 2025-07-06 20:19:55.844870 | orchestrator | 2025-07-06 20:19:55.844878 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-07-06 20:19:55.844886 | orchestrator | Sunday 06 July 2025 20:18:54 +0000 (0:00:00.064) 0:00:43.681 *********** 2025-07-06 20:19:55.844894 | orchestrator | 2025-07-06 20:19:55.844901 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-07-06 20:19:55.844909 | orchestrator | Sunday 06 July 2025 20:18:54 +0000 (0:00:00.065) 0:00:43.747 *********** 2025-07-06 20:19:55.844917 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:19:55.844925 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:19:55.844932 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:19:55.844940 | orchestrator | 2025-07-06 20:19:55.844948 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:19:55.844956 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2025-07-06 20:19:55.844999 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-07-06 20:19:55.845008 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-07-06 20:19:55.845016 | orchestrator | 2025-07-06 20:19:55.845024 | orchestrator | 2025-07-06 20:19:55.845032 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:19:55.845040 | orchestrator | Sunday 06 July 2025 20:19:53 +0000 (0:00:59.053) 0:01:42.801 *********** 2025-07-06 20:19:55.845047 | orchestrator | =============================================================================== 2025-07-06 20:19:55.845055 | orchestrator | horizon : Restart horizon container ------------------------------------ 59.05s 2025-07-06 20:19:55.845063 | orchestrator | horizon : 
Running Horizon bootstrap container -------------------------- 15.31s 2025-07-06 20:19:55.845071 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.17s 2025-07-06 20:19:55.845079 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.15s 2025-07-06 20:19:55.845086 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.08s 2025-07-06 20:19:55.845094 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.74s 2025-07-06 20:19:55.845102 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.64s 2025-07-06 20:19:55.845110 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.54s 2025-07-06 20:19:55.845131 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.50s 2025-07-06 20:19:55.845145 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.12s 2025-07-06 20:19:55.845153 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.04s 2025-07-06 20:19:55.845161 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.87s 2025-07-06 20:19:55.845169 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.72s 2025-07-06 20:19:55.845177 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.70s 2025-07-06 20:19:55.845184 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.68s 2025-07-06 20:19:55.845192 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.58s 2025-07-06 20:19:55.845200 | orchestrator | horizon : Update policy file name --------------------------------------- 0.44s 2025-07-06 20:19:55.845207 | orchestrator | horizon : 
include_tasks ------------------------------------------------- 0.43s 2025-07-06 20:19:55.845215 | orchestrator | horizon : Update policy file name --------------------------------------- 0.42s 2025-07-06 20:19:55.845223 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.37s 2025-07-06 20:19:55.845231 | orchestrator | 2025-07-06 20:19:55 | INFO  | Task 72a9ec03-79a0-4902-9ea3-055208d1bb37 is in state STARTED 2025-07-06 20:19:55.845239 | orchestrator | 2025-07-06 20:19:55 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:19:58.883772 | orchestrator | 2025-07-06 20:19:58 | INFO  | Task c88be4ea-6643-4e26-94d2-003b1142d99b is in state STARTED 2025-07-06 20:19:58.885206 | orchestrator | 2025-07-06 20:19:58 | INFO  | Task 72a9ec03-79a0-4902-9ea3-055208d1bb37 is in state STARTED 2025-07-06 20:19:58.885245 | orchestrator | 2025-07-06 20:19:58 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:20:01.923217 | orchestrator | 2025-07-06 20:20:01 | INFO  | Task c88be4ea-6643-4e26-94d2-003b1142d99b is in state STARTED 2025-07-06 20:20:01.925176 | orchestrator | 2025-07-06 20:20:01 | INFO  | Task 72a9ec03-79a0-4902-9ea3-055208d1bb37 is in state STARTED 2025-07-06 20:20:01.925265 | orchestrator | 2025-07-06 20:20:01 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:20:04.961426 | orchestrator | 2025-07-06 20:20:04 | INFO  | Task c88be4ea-6643-4e26-94d2-003b1142d99b is in state STARTED 2025-07-06 20:20:04.963150 | orchestrator | 2025-07-06 20:20:04 | INFO  | Task 72a9ec03-79a0-4902-9ea3-055208d1bb37 is in state STARTED 2025-07-06 20:20:04.963185 | orchestrator | 2025-07-06 20:20:04 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:20:08.004866 | orchestrator | 2025-07-06 20:20:08 | INFO  | Task c88be4ea-6643-4e26-94d2-003b1142d99b is in state STARTED 2025-07-06 20:20:08.005276 | orchestrator | 2025-07-06 20:20:08 | INFO  | Task 72a9ec03-79a0-4902-9ea3-055208d1bb37 is in state STARTED 
1 second(s) until the next check 2025-07-06 20:20:41.486420 | orchestrator | 
2025-07-06 20:20:41 | INFO  | Task c88be4ea-6643-4e26-94d2-003b1142d99b is in state STARTED
2025-07-06 20:20:41.487331 | orchestrator | 2025-07-06 20:20:41 | INFO  | Task 72a9ec03-79a0-4902-9ea3-055208d1bb37 is in state STARTED
2025-07-06 20:20:41.487744 | orchestrator | 2025-07-06 20:20:41 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:20:44.522399 | orchestrator | 2025-07-06 20:20:44 | INFO  | Task e1c274c6-39cf-4eef-a6ae-af863a58c3ba is in state STARTED
2025-07-06 20:20:44.524301 | orchestrator | 2025-07-06 20:20:44 | INFO  | Task d81dc606-9df8-4696-aa10-20f01755f5b3 is in state STARTED
2025-07-06 20:20:44.525997 | orchestrator | 2025-07-06 20:20:44 | INFO  | Task c88be4ea-6643-4e26-94d2-003b1142d99b is in state SUCCESS
2025-07-06 20:20:44.527390 | orchestrator |
2025-07-06 20:20:44.527423 | orchestrator |
2025-07-06 20:20:44.527435 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-06 20:20:44.527448 | orchestrator |
2025-07-06 20:20:44.527459 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-06 20:20:44.527471 | orchestrator | Sunday 06 July 2025 20:18:10 +0000 (0:00:00.233) 0:00:00.233 ***********
2025-07-06 20:20:44.527482 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:20:44.527495 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:20:44.527505 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:20:44.527516 | orchestrator |
2025-07-06 20:20:44.527527 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-06 20:20:44.527538 | orchestrator | Sunday 06 July 2025 20:18:10 +0000 (0:00:00.267) 0:00:00.501 ***********
2025-07-06 20:20:44.527549 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-07-06 20:20:44.527560 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-07-06 20:20:44.527571 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-07-06 20:20:44.527581 | orchestrator |
2025-07-06 20:20:44.527592 | orchestrator | PLAY [Apply role keystone] *****************************************************
2025-07-06 20:20:44.527603 | orchestrator |
2025-07-06 20:20:44.527614 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-07-06 20:20:44.527625 | orchestrator | Sunday 06 July 2025 20:18:11 +0000 (0:00:00.348) 0:00:00.849 ***********
2025-07-06 20:20:44.527636 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-06 20:20:44.527647 | orchestrator |
2025-07-06 20:20:44.527658 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2025-07-06 20:20:44.527669 | orchestrator | Sunday 06 July 2025 20:18:11 +0000 (0:00:00.477) 0:00:01.327 ***********
2025-07-06 20:20:44.527686 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-06 20:20:44.527719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-06 20:20:44.527760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-06 20:20:44.527775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-06 20:20:44.527788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-06 20:20:44.527800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-06 20:20:44.527812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-06 20:20:44.527830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-06 20:20:44.527849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-06 20:20:44.527861 | orchestrator |
2025-07-06 20:20:44.527872 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2025-07-06 20:20:44.528919 | orchestrator | Sunday 06 July 2025 20:18:13 +0000 (0:00:01.668) 0:00:02.995 ***********
2025-07-06 20:20:44.528958 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml)
2025-07-06 20:20:44.528970 | orchestrator |
2025-07-06 20:20:44.528981 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2025-07-06 20:20:44.528994 | orchestrator | Sunday 06 July 2025 20:18:14 +0000 (0:00:00.771) 0:00:03.766 ***********
2025-07-06 20:20:44.529005 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:20:44.529017 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:20:44.529028 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:20:44.529039 | orchestrator |
2025-07-06 20:20:44.529050 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2025-07-06 20:20:44.529061 | orchestrator | Sunday 06 July 2025 20:18:14 +0000 (0:00:00.376) 0:00:04.142 ***********
2025-07-06 20:20:44.529071 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-07-06 20:20:44.529083 | orchestrator |
2025-07-06 20:20:44.529094 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-07-06 20:20:44.529105 | orchestrator | Sunday 06 July 2025 20:18:15 +0000 (0:00:00.606) 0:00:04.749 ***********
2025-07-06 20:20:44.529139 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-06 20:20:44.529150 | orchestrator |
2025-07-06 20:20:44.529161 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] *******
2025-07-06 20:20:44.529172 | orchestrator | Sunday 06 July 2025 20:18:15 +0000 (0:00:00.476) 0:00:05.225 ***********
2025-07-06 20:20:44.529185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-06 20:20:44.529206 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-06 20:20:44.529232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-06 20:20:44.529253 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-06 20:20:44.529267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-06 20:20:44.529279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-06 20:20:44.529290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-06 20:20:44.529314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-06 20:20:44.529326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-06 20:20:44.529337 | orchestrator |
2025-07-06 20:20:44.529348 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] ***
2025-07-06 20:20:44.529359 | orchestrator | Sunday 06 July 2025 20:18:18 +0000 (0:00:03.279) 0:00:08.504 ***********
2025-07-06 20:20:44.529378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-06 20:20:44.529391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-06 20:20:44.529403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-06 20:20:44.529414 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:20:44.529431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-06 20:20:44.529455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-06 20:20:44.529473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-06 20:20:44.529484 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:20:44.529498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-06 20:20:44.529512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-06 20:20:44.529526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-06 20:20:44.529545 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:20:44.529557 | orchestrator |
2025-07-06 20:20:44.529569 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] ****
2025-07-06 20:20:44.529582 | orchestrator | Sunday 06 July 2025 20:18:19 +0000 (0:00:00.522) 0:00:09.026 ***********
2025-07-06 20:20:44.529599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-06 20:20:44.529620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-06 20:20:44.529634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-06 20:20:44.529646 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:20:44.529660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-06 20:20:44.529680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-06 20:20:44.529698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-06 20:20:44.529712 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:20:44.529725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-06 20:20:44.529745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-06 20:20:44.529759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-06 20:20:44.529772 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:20:44.529784 | orchestrator |
2025-07-06 20:20:44.529797 | orchestrator | TASK [keystone : Copying over config.json files for services] ******************
2025-07-06 20:20:44.529809 | orchestrator | Sunday 06 July 2025 20:18:20 +0000 (0:00:00.668) 0:00:09.695 ***********
2025-07-06 20:20:44.529831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-06 20:20:44.529851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-06 20:20:44.529871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-06 20:20:44.529883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-06 20:20:44.529894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-06 20:20:44.529912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-06 20:20:44.529923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-06 20:20:44.529939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-06 20:20:44.529950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-06 20:20:44.529961 | orchestrator |
2025-07-06 20:20:44.529972 | orchestrator | TASK [keystone : Copying over keystone.conf] ***********************************
2025-07-06 20:20:44.529984 | orchestrator | Sunday 06 July 2025 20:18:23 +0000 (0:00:03.312) 0:00:13.008 ***********
2025-07-06 20:20:44.530002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes':
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-06 20:20:44.530080 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-06 20:20:44.530096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 
'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-06 20:20:44.530168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-06 20:20:44.530190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-06 20:20:44.530202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-06 20:20:44.530221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-06 20:20:44.530232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-06 20:20:44.530244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-06 20:20:44.530253 | orchestrator | 2025-07-06 20:20:44.530263 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-07-06 20:20:44.530273 | orchestrator | Sunday 06 July 2025 20:18:28 +0000 (0:00:04.694) 0:00:17.703 *********** 2025-07-06 20:20:44.530283 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:20:44.530293 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:20:44.530303 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:20:44.530312 | orchestrator | 2025-07-06 20:20:44.530326 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-07-06 20:20:44.530336 | orchestrator | Sunday 06 July 2025 20:18:29 +0000 (0:00:01.351) 0:00:19.054 *********** 2025-07-06 20:20:44.530346 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:20:44.530356 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:20:44.530365 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:20:44.530375 | orchestrator | 2025-07-06 20:20:44.530385 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-07-06 20:20:44.530394 | orchestrator | Sunday 06 July 2025 
20:18:30 +0000 (0:00:00.538) 0:00:19.593 *********** 2025-07-06 20:20:44.530404 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:20:44.530414 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:20:44.530423 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:20:44.530433 | orchestrator | 2025-07-06 20:20:44.530442 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-07-06 20:20:44.530452 | orchestrator | Sunday 06 July 2025 20:18:30 +0000 (0:00:00.386) 0:00:19.979 *********** 2025-07-06 20:20:44.530461 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:20:44.530471 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:20:44.530480 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:20:44.530490 | orchestrator | 2025-07-06 20:20:44.530499 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-07-06 20:20:44.530509 | orchestrator | Sunday 06 July 2025 20:18:30 +0000 (0:00:00.258) 0:00:20.238 *********** 2025-07-06 20:20:44.530527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-06 20:20:44.530544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-06 20:20:44.530555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-06 20:20:44.530570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-06 20:20:44.530581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-06 20:20:44.530605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-06 20:20:44.530616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-06 20:20:44.530626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-06 20:20:44.530636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-06 20:20:44.530646 | orchestrator | 2025-07-06 20:20:44.530656 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-07-06 20:20:44.530666 | orchestrator | Sunday 06 July 2025 20:18:33 +0000 (0:00:02.456) 0:00:22.695 *********** 2025-07-06 20:20:44.530676 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:20:44.530686 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:20:44.530695 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:20:44.530705 | orchestrator | 2025-07-06 20:20:44.530714 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-07-06 20:20:44.530724 | orchestrator | Sunday 06 July 2025 20:18:33 +0000 (0:00:00.280) 0:00:22.976 *********** 2025-07-06 20:20:44.530738 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-07-06 20:20:44.530748 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-07-06 20:20:44.530758 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-07-06 20:20:44.530768 | orchestrator | 2025-07-06 20:20:44.530777 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-07-06 20:20:44.530787 | orchestrator | Sunday 06 July 2025 20:18:35 +0000 (0:00:01.876) 0:00:24.852 *********** 2025-07-06 20:20:44.530803 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-06 20:20:44.530812 | orchestrator | 2025-07-06 20:20:44.530822 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-07-06 20:20:44.530831 | orchestrator | Sunday 06 July 2025 20:18:36 +0000 (0:00:00.942) 0:00:25.795 *********** 2025-07-06 20:20:44.530841 | orchestrator | skipping: [testbed-node-0] 
2025-07-06 20:20:44.530850 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:20:44.530860 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:20:44.530870 | orchestrator | 2025-07-06 20:20:44.530879 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-07-06 20:20:44.530889 | orchestrator | Sunday 06 July 2025 20:18:36 +0000 (0:00:00.569) 0:00:26.365 *********** 2025-07-06 20:20:44.530899 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-07-06 20:20:44.530908 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-06 20:20:44.530918 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-07-06 20:20:44.530927 | orchestrator | 2025-07-06 20:20:44.530937 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-07-06 20:20:44.530946 | orchestrator | Sunday 06 July 2025 20:18:37 +0000 (0:00:01.102) 0:00:27.467 *********** 2025-07-06 20:20:44.530960 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:20:44.530970 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:20:44.530980 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:20:44.530989 | orchestrator | 2025-07-06 20:20:44.530999 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-07-06 20:20:44.531009 | orchestrator | Sunday 06 July 2025 20:18:38 +0000 (0:00:00.321) 0:00:27.789 *********** 2025-07-06 20:20:44.531018 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-07-06 20:20:44.531028 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-07-06 20:20:44.531037 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-07-06 20:20:44.531047 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-07-06 20:20:44.531057 | orchestrator | changed: 
[testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-07-06 20:20:44.531066 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-07-06 20:20:44.531076 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-07-06 20:20:44.531086 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-07-06 20:20:44.531095 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-07-06 20:20:44.531120 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-07-06 20:20:44.531130 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-07-06 20:20:44.531140 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-07-06 20:20:44.531149 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-07-06 20:20:44.531159 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-07-06 20:20:44.531168 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-07-06 20:20:44.531178 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-07-06 20:20:44.531187 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-07-06 20:20:44.531197 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-07-06 20:20:44.531206 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-07-06 
20:20:44.531225 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-07-06 20:20:44.531235 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-07-06 20:20:44.531244 | orchestrator | 2025-07-06 20:20:44.531254 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-07-06 20:20:44.531263 | orchestrator | Sunday 06 July 2025 20:18:47 +0000 (0:00:08.982) 0:00:36.771 *********** 2025-07-06 20:20:44.531273 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-07-06 20:20:44.531282 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-07-06 20:20:44.531291 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-07-06 20:20:44.531301 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-07-06 20:20:44.531315 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-07-06 20:20:44.531324 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-07-06 20:20:44.531334 | orchestrator | 2025-07-06 20:20:44.531343 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-07-06 20:20:44.531353 | orchestrator | Sunday 06 July 2025 20:18:49 +0000 (0:00:02.543) 0:00:39.315 *********** 2025-07-06 20:20:44.531369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-06 20:20:44.531380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-06 20:20:44.531391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-06 20:20:44.531408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-06 20:20:44.531422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}}) 2025-07-06 20:20:44.531432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-06 20:20:44.531447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-06 20:20:44.531458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-06 20:20:44.531468 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-06 20:20:44.531482 | orchestrator | 2025-07-06 20:20:44.531492 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-07-06 20:20:44.531502 | orchestrator | Sunday 06 July 2025 20:18:52 +0000 (0:00:02.311) 0:00:41.626 *********** 2025-07-06 20:20:44.531511 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:20:44.531521 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:20:44.531531 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:20:44.531540 | orchestrator | 2025-07-06 20:20:44.531549 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-07-06 20:20:44.531559 | orchestrator | Sunday 06 July 2025 20:18:52 +0000 (0:00:00.300) 0:00:41.927 *********** 2025-07-06 20:20:44.531568 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:20:44.531578 | orchestrator | 2025-07-06 20:20:44.531587 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-07-06 20:20:44.531597 | orchestrator | Sunday 06 July 2025 20:18:54 +0000 (0:00:02.281) 0:00:44.208 *********** 2025-07-06 20:20:44.531607 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:20:44.531616 | orchestrator | 2025-07-06 20:20:44.531626 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-07-06 20:20:44.531770 
| orchestrator | Sunday 06 July 2025 20:18:57 +0000 (0:00:02.538) 0:00:46.747 *********** 2025-07-06 20:20:44.531782 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:20:44.531791 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:20:44.531801 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:20:44.531810 | orchestrator | 2025-07-06 20:20:44.531820 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-07-06 20:20:44.531830 | orchestrator | Sunday 06 July 2025 20:18:58 +0000 (0:00:00.866) 0:00:47.613 *********** 2025-07-06 20:20:44.531839 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:20:44.531849 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:20:44.531863 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:20:44.531873 | orchestrator | 2025-07-06 20:20:44.531883 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-07-06 20:20:44.531892 | orchestrator | Sunday 06 July 2025 20:18:58 +0000 (0:00:00.405) 0:00:48.019 *********** 2025-07-06 20:20:44.531902 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:20:44.531911 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:20:44.531921 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:20:44.531930 | orchestrator | 2025-07-06 20:20:44.531940 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-07-06 20:20:44.531949 | orchestrator | Sunday 06 July 2025 20:18:58 +0000 (0:00:00.391) 0:00:48.411 *********** 2025-07-06 20:20:44.531959 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:20:44.531968 | orchestrator | 2025-07-06 20:20:44.531978 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-07-06 20:20:44.531987 | orchestrator | Sunday 06 July 2025 20:19:12 +0000 (0:00:13.567) 0:01:01.978 *********** 2025-07-06 20:20:44.531997 | orchestrator | changed: [testbed-node-0] 2025-07-06 
20:20:44.532006 | orchestrator | 2025-07-06 20:20:44.532016 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-07-06 20:20:44.532025 | orchestrator | Sunday 06 July 2025 20:19:22 +0000 (0:00:10.402) 0:01:12.381 *********** 2025-07-06 20:20:44.532035 | orchestrator | 2025-07-06 20:20:44.532044 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-07-06 20:20:44.532054 | orchestrator | Sunday 06 July 2025 20:19:23 +0000 (0:00:00.254) 0:01:12.635 *********** 2025-07-06 20:20:44.532063 | orchestrator | 2025-07-06 20:20:44.532073 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-07-06 20:20:44.532082 | orchestrator | Sunday 06 July 2025 20:19:23 +0000 (0:00:00.061) 0:01:12.697 *********** 2025-07-06 20:20:44.532100 | orchestrator | 2025-07-06 20:20:44.532169 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-07-06 20:20:44.532180 | orchestrator | Sunday 06 July 2025 20:19:23 +0000 (0:00:00.065) 0:01:12.763 *********** 2025-07-06 20:20:44.532190 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:20:44.532199 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:20:44.532209 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:20:44.532218 | orchestrator | 2025-07-06 20:20:44.532228 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-07-06 20:20:44.532237 | orchestrator | Sunday 06 July 2025 20:19:40 +0000 (0:00:16.865) 0:01:29.628 *********** 2025-07-06 20:20:44.532247 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:20:44.532257 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:20:44.532266 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:20:44.532275 | orchestrator | 2025-07-06 20:20:44.532285 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] 
************************ 2025-07-06 20:20:44.532295 | orchestrator | Sunday 06 July 2025 20:19:47 +0000 (0:00:07.624) 0:01:37.253 *********** 2025-07-06 20:20:44.532304 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:20:44.532312 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:20:44.532319 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:20:44.532327 | orchestrator | 2025-07-06 20:20:44.532335 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-07-06 20:20:44.532343 | orchestrator | Sunday 06 July 2025 20:19:55 +0000 (0:00:07.431) 0:01:44.685 *********** 2025-07-06 20:20:44.532351 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:20:44.532358 | orchestrator | 2025-07-06 20:20:44.532366 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-07-06 20:20:44.532374 | orchestrator | Sunday 06 July 2025 20:19:55 +0000 (0:00:00.625) 0:01:45.310 *********** 2025-07-06 20:20:44.532382 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:20:44.532390 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:20:44.532399 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:20:44.532408 | orchestrator | 2025-07-06 20:20:44.532416 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-07-06 20:20:44.532425 | orchestrator | Sunday 06 July 2025 20:19:56 +0000 (0:00:00.852) 0:01:46.163 *********** 2025-07-06 20:20:44.532434 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:20:44.532443 | orchestrator | 2025-07-06 20:20:44.532452 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-07-06 20:20:44.532461 | orchestrator | Sunday 06 July 2025 20:19:58 +0000 (0:00:01.743) 0:01:47.906 *********** 2025-07-06 20:20:44.532469 | orchestrator | changed: [testbed-node-0] => 
(item=RegionOne) 2025-07-06 20:20:44.532478 | orchestrator | 2025-07-06 20:20:44.532487 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-07-06 20:20:44.532495 | orchestrator | Sunday 06 July 2025 20:20:09 +0000 (0:00:10.888) 0:01:58.795 *********** 2025-07-06 20:20:44.532505 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-07-06 20:20:44.532513 | orchestrator | 2025-07-06 20:20:44.532522 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-07-06 20:20:44.532531 | orchestrator | Sunday 06 July 2025 20:20:31 +0000 (0:00:22.001) 0:02:20.797 *********** 2025-07-06 20:20:44.532540 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-07-06 20:20:44.532549 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-07-06 20:20:44.532557 | orchestrator | 2025-07-06 20:20:44.532566 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-07-06 20:20:44.532575 | orchestrator | Sunday 06 July 2025 20:20:37 +0000 (0:00:06.608) 0:02:27.405 *********** 2025-07-06 20:20:44.532584 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:20:44.532593 | orchestrator | 2025-07-06 20:20:44.532601 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-07-06 20:20:44.532615 | orchestrator | Sunday 06 July 2025 20:20:38 +0000 (0:00:00.249) 0:02:27.654 *********** 2025-07-06 20:20:44.532624 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:20:44.532633 | orchestrator | 2025-07-06 20:20:44.532642 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-07-06 20:20:44.532655 | orchestrator | Sunday 06 July 2025 20:20:38 +0000 (0:00:00.115) 0:02:27.770 *********** 2025-07-06 20:20:44.532664 | 
orchestrator | skipping: [testbed-node-0] 2025-07-06 20:20:44.532673 | orchestrator | 2025-07-06 20:20:44.532682 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-07-06 20:20:44.532691 | orchestrator | Sunday 06 July 2025 20:20:38 +0000 (0:00:00.170) 0:02:27.940 *********** 2025-07-06 20:20:44.532700 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:20:44.532708 | orchestrator | 2025-07-06 20:20:44.532717 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-07-06 20:20:44.532726 | orchestrator | Sunday 06 July 2025 20:20:38 +0000 (0:00:00.284) 0:02:28.225 *********** 2025-07-06 20:20:44.532735 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:20:44.532744 | orchestrator | 2025-07-06 20:20:44.532753 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-07-06 20:20:44.532762 | orchestrator | Sunday 06 July 2025 20:20:42 +0000 (0:00:03.657) 0:02:31.882 *********** 2025-07-06 20:20:44.532771 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:20:44.532780 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:20:44.532788 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:20:44.532796 | orchestrator | 2025-07-06 20:20:44.532804 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:20:44.532813 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-07-06 20:20:44.532822 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-07-06 20:20:44.532834 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-07-06 20:20:44.532842 | orchestrator | 2025-07-06 20:20:44.532850 | orchestrator | 2025-07-06 20:20:44.532858 | orchestrator | TASKS RECAP 
******************************************************************** 2025-07-06 20:20:44.532866 | orchestrator | Sunday 06 July 2025 20:20:42 +0000 (0:00:00.542) 0:02:32.424 *********** 2025-07-06 20:20:44.532874 | orchestrator | =============================================================================== 2025-07-06 20:20:44.532881 | orchestrator | service-ks-register : keystone | Creating services --------------------- 22.00s 2025-07-06 20:20:44.532889 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 16.87s 2025-07-06 20:20:44.532897 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.57s 2025-07-06 20:20:44.532905 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 10.89s 2025-07-06 20:20:44.532912 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.40s 2025-07-06 20:20:44.532920 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.98s 2025-07-06 20:20:44.532928 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 7.62s 2025-07-06 20:20:44.532936 | orchestrator | keystone : Restart keystone container ----------------------------------- 7.43s 2025-07-06 20:20:44.532943 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.61s 2025-07-06 20:20:44.532951 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.69s 2025-07-06 20:20:44.532959 | orchestrator | keystone : Creating default user role ----------------------------------- 3.66s 2025-07-06 20:20:44.532967 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.31s 2025-07-06 20:20:44.532974 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.28s 2025-07-06 20:20:44.532987 | orchestrator | keystone : Copying files for 
keystone-ssh ------------------------------- 2.54s 2025-07-06 20:20:44.532995 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.54s 2025-07-06 20:20:44.533003 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.46s 2025-07-06 20:20:44.533010 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.31s 2025-07-06 20:20:44.533018 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.28s 2025-07-06 20:20:44.533026 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.88s 2025-07-06 20:20:44.533034 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.74s 2025-07-06 20:20:44.533042 | orchestrator | 2025-07-06 20:20:44 | INFO  | Task c40f7884-0532-4324-99b0-1dc59b758d54 is in state STARTED 2025-07-06 20:20:44.533049 | orchestrator | 2025-07-06 20:20:44 | INFO  | Task bab998db-3240-4343-911c-cb6d86cc74b5 is in state STARTED 2025-07-06 20:20:44.533057 | orchestrator | 2025-07-06 20:20:44 | INFO  | Task 72a9ec03-79a0-4902-9ea3-055208d1bb37 is in state SUCCESS 2025-07-06 20:20:44.533065 | orchestrator | 2025-07-06 20:20:44 | INFO  | Task 4b37e11b-e68c-4889-9640-2f8424eb8826 is in state STARTED 2025-07-06 20:20:44.533073 | orchestrator | 2025-07-06 20:20:44 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:20:47.565904 | orchestrator | 2025-07-06 20:20:47 | INFO  | Task e1c274c6-39cf-4eef-a6ae-af863a58c3ba is in state STARTED 2025-07-06 20:20:47.565974 | orchestrator | 2025-07-06 20:20:47 | INFO  | Task d81dc606-9df8-4696-aa10-20f01755f5b3 is in state STARTED 2025-07-06 20:20:47.566465 | orchestrator | 2025-07-06 20:20:47 | INFO  | Task c40f7884-0532-4324-99b0-1dc59b758d54 is in state STARTED 2025-07-06 20:20:47.566899 | orchestrator | 2025-07-06 20:20:47 | INFO  | Task bab998db-3240-4343-911c-cb6d86cc74b5 is in state STARTED 
2025-07-06 20:20:47.568320 | orchestrator | 2025-07-06 20:20:47 | INFO  | Task 4b37e11b-e68c-4889-9640-2f8424eb8826 is in state STARTED 2025-07-06 20:20:47.568368 | orchestrator | 2025-07-06 20:20:47 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:20:50.596090 | orchestrator | 2025-07-06 20:20:50 | INFO  | Task e1c274c6-39cf-4eef-a6ae-af863a58c3ba is in state STARTED 2025-07-06 20:20:50.596317 | orchestrator | 2025-07-06 20:20:50 | INFO  | Task d81dc606-9df8-4696-aa10-20f01755f5b3 is in state STARTED 2025-07-06 20:20:50.597841 | orchestrator | 2025-07-06 20:20:50 | INFO  | Task c40f7884-0532-4324-99b0-1dc59b758d54 is in state STARTED 2025-07-06 20:20:50.598427 | orchestrator | 2025-07-06 20:20:50 | INFO  | Task bab998db-3240-4343-911c-cb6d86cc74b5 is in state SUCCESS 2025-07-06 20:20:50.598804 | orchestrator | 2025-07-06 20:20:50 | INFO  | Task 4b37e11b-e68c-4889-9640-2f8424eb8826 is in state STARTED 2025-07-06 20:20:50.599567 | orchestrator | 2025-07-06 20:20:50 | INFO  | Task 17260003-634b-4cce-b77d-180c69221572 is in state STARTED 2025-07-06 20:20:50.599587 | orchestrator | 2025-07-06 20:20:50 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:20:53.635250 | orchestrator | 2025-07-06 20:20:53 | INFO  | Task e1c274c6-39cf-4eef-a6ae-af863a58c3ba is in state STARTED 2025-07-06 20:20:53.635677 | orchestrator | 2025-07-06 20:20:53 | INFO  | Task d81dc606-9df8-4696-aa10-20f01755f5b3 is in state STARTED 2025-07-06 20:20:53.637221 | orchestrator | 2025-07-06 20:20:53 | INFO  | Task c40f7884-0532-4324-99b0-1dc59b758d54 is in state STARTED 2025-07-06 20:20:53.637818 | orchestrator | 2025-07-06 20:20:53 | INFO  | Task 4b37e11b-e68c-4889-9640-2f8424eb8826 is in state STARTED 2025-07-06 20:20:53.638886 | orchestrator | 2025-07-06 20:20:53 | INFO  | Task 17260003-634b-4cce-b77d-180c69221572 is in state STARTED 2025-07-06 20:20:53.638937 | orchestrator | 2025-07-06 20:20:53 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:20:56.673316 | 
orchestrator | 2025-07-06 20:20:56 | INFO  | Task e1c274c6-39cf-4eef-a6ae-af863a58c3ba is in state STARTED 2025-07-06 20:20:56.674376 | orchestrator | 2025-07-06 20:20:56 | INFO  | Task d81dc606-9df8-4696-aa10-20f01755f5b3 is in state STARTED 2025-07-06 20:20:56.676007 | orchestrator | 2025-07-06 20:20:56 | INFO  | Task c40f7884-0532-4324-99b0-1dc59b758d54 is in state STARTED 2025-07-06 20:20:56.677682 | orchestrator | 2025-07-06 20:20:56 | INFO  | Task 4b37e11b-e68c-4889-9640-2f8424eb8826 is in state STARTED 2025-07-06 20:20:56.679429 | orchestrator | 2025-07-06 20:20:56 | INFO  | Task 17260003-634b-4cce-b77d-180c69221572 is in state STARTED 2025-07-06 20:20:56.679467 | orchestrator | 2025-07-06 20:20:56 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:22:03.511659 | orchestrator | 2025-07-06 20:22:03 | INFO  | Task e1c274c6-39cf-4eef-a6ae-af863a58c3ba is in state STARTED 2025-07-06 20:22:03.513210 | orchestrator | 2025-07-06 20:22:03 | INFO  | Task d81dc606-9df8-4696-aa10-20f01755f5b3 is in state STARTED 2025-07-06 20:22:03.515931 | orchestrator | 2025-07-06 20:22:03 | INFO  | Task c40f7884-0532-4324-99b0-1dc59b758d54 is in state STARTED 2025-07-06 20:22:03.516618 | orchestrator | 2025-07-06 20:22:03 | INFO  | Task 
4b37e11b-e68c-4889-9640-2f8424eb8826 is in state STARTED 2025-07-06 20:22:03.517146 | orchestrator | 2025-07-06 20:22:03 | INFO  | Task 17260003-634b-4cce-b77d-180c69221572 is in state STARTED 2025-07-06 20:22:03.517219 | orchestrator | 2025-07-06 20:22:03 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:22:06.542451 | orchestrator | 2025-07-06 20:22:06 | INFO  | Task e1c274c6-39cf-4eef-a6ae-af863a58c3ba is in state STARTED 2025-07-06 20:22:06.542567 | orchestrator | 2025-07-06 20:22:06 | INFO  | Task d81dc606-9df8-4696-aa10-20f01755f5b3 is in state STARTED 2025-07-06 20:22:06.542779 | orchestrator | 2025-07-06 20:22:06 | INFO  | Task c40f7884-0532-4324-99b0-1dc59b758d54 is in state STARTED 2025-07-06 20:22:06.543421 | orchestrator | 2025-07-06 20:22:06 | INFO  | Task 4b37e11b-e68c-4889-9640-2f8424eb8826 is in state STARTED 2025-07-06 20:22:06.543943 | orchestrator | 2025-07-06 20:22:06 | INFO  | Task 17260003-634b-4cce-b77d-180c69221572 is in state STARTED 2025-07-06 20:22:06.544049 | orchestrator | 2025-07-06 20:22:06 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:22:09.565843 | orchestrator | 2025-07-06 20:22:09 | INFO  | Task e1c274c6-39cf-4eef-a6ae-af863a58c3ba is in state STARTED 2025-07-06 20:22:09.565941 | orchestrator | 2025-07-06 20:22:09 | INFO  | Task d81dc606-9df8-4696-aa10-20f01755f5b3 is in state STARTED 2025-07-06 20:22:09.566436 | orchestrator | 2025-07-06 20:22:09 | INFO  | Task c40f7884-0532-4324-99b0-1dc59b758d54 is in state STARTED 2025-07-06 20:22:09.566961 | orchestrator | 2025-07-06 20:22:09 | INFO  | Task 4b37e11b-e68c-4889-9640-2f8424eb8826 is in state STARTED 2025-07-06 20:22:09.567602 | orchestrator | 2025-07-06 20:22:09 | INFO  | Task 17260003-634b-4cce-b77d-180c69221572 is in state STARTED 2025-07-06 20:22:09.567625 | orchestrator | 2025-07-06 20:22:09 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:22:12.616808 | orchestrator | 2025-07-06 20:22:12 | INFO  | Task 
e1c274c6-39cf-4eef-a6ae-af863a58c3ba is in state STARTED
2025-07-06 20:22:12.617018 | orchestrator | 2025-07-06 20:22:12 | INFO  | Task d81dc606-9df8-4696-aa10-20f01755f5b3 is in state STARTED
2025-07-06 20:22:12.617831 | orchestrator | 2025-07-06 20:22:12 | INFO  | Task c40f7884-0532-4324-99b0-1dc59b758d54 is in state STARTED
2025-07-06 20:22:12.618335 | orchestrator | 2025-07-06 20:22:12 | INFO  | Task 4b37e11b-e68c-4889-9640-2f8424eb8826 is in state STARTED
2025-07-06 20:22:12.618897 | orchestrator | 2025-07-06 20:22:12 | INFO  | Task 17260003-634b-4cce-b77d-180c69221572 is in state STARTED
2025-07-06 20:22:12.618918 | orchestrator | 2025-07-06 20:22:12 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:22:15.648838 | orchestrator | 2025-07-06 20:22:15 | INFO  | Task e1c274c6-39cf-4eef-a6ae-af863a58c3ba is in state STARTED
2025-07-06 20:22:15.649047 | orchestrator | 2025-07-06 20:22:15 | INFO  | Task d81dc606-9df8-4696-aa10-20f01755f5b3 is in state STARTED
2025-07-06 20:22:15.649617 | orchestrator | 2025-07-06 20:22:15 | INFO  | Task c40f7884-0532-4324-99b0-1dc59b758d54 is in state SUCCESS
2025-07-06 20:22:15.650007 | orchestrator |
2025-07-06 20:22:15.650086 | orchestrator |
2025-07-06 20:22:15.650129 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2025-07-06 20:22:15.650141 | orchestrator |
2025-07-06 20:22:15.650170 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2025-07-06 20:22:15.650182 | orchestrator | Sunday 06 July 2025 20:19:53 +0000 (0:00:00.218) 0:00:00.218 ***********
2025-07-06 20:22:15.650194 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2025-07-06 20:22:15.650206 | orchestrator |
2025-07-06 20:22:15.650218 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2025-07-06 20:22:15.650229 | orchestrator | Sunday 06 July 2025 20:19:53 +0000 (0:00:00.205) 0:00:00.424 ***********
2025-07-06 20:22:15.650240 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2025-07-06 20:22:15.650251 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2025-07-06 20:22:15.650263 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2025-07-06 20:22:15.650274 | orchestrator |
2025-07-06 20:22:15.650285 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2025-07-06 20:22:15.650376 | orchestrator | Sunday 06 July 2025 20:19:54 +0000 (0:00:01.101) 0:00:01.526 ***********
2025-07-06 20:22:15.650398 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2025-07-06 20:22:15.650415 | orchestrator |
2025-07-06 20:22:15.650434 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2025-07-06 20:22:15.650452 | orchestrator | Sunday 06 July 2025 20:19:55 +0000 (0:00:01.030) 0:00:02.556 ***********
2025-07-06 20:22:15.650470 | orchestrator | changed: [testbed-manager]
2025-07-06 20:22:15.650487 | orchestrator |
2025-07-06 20:22:15.650505 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2025-07-06 20:22:15.650524 | orchestrator | Sunday 06 July 2025 20:19:56 +0000 (0:00:00.846) 0:00:03.402 ***********
2025-07-06 20:22:15.650542 | orchestrator | changed: [testbed-manager]
2025-07-06 20:22:15.650561 | orchestrator |
2025-07-06 20:22:15.650579 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2025-07-06 20:22:15.650597 | orchestrator | Sunday 06 July 2025 20:19:57 +0000 (0:00:00.770) 0:00:04.173 ***********
2025-07-06 20:22:15.650613 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
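The "Manage cephclient service" task above is retried (up to 10 times) until the docker-compose-managed container comes up, and the orchestrator's interleaved "Wait 1 second(s) until the next check" lines follow the same poll-until-done pattern. A minimal sketch of that retry loop, with all names (`wait_until`, `fake_health_check`) hypothetical:

```python
import time

def wait_until(check, retries=10, delay=1.0):
    """Poll `check` until it returns truthy; raise after `retries` failed attempts."""
    for attempt in range(1, retries + 1):
        result = check()
        if result:
            return result
        # Mirrors the log's "FAILED - RETRYING ... (N retries left)" messages
        print(f"FAILED - RETRYING ({retries - attempt} retries left)")
        time.sleep(delay)
    raise TimeoutError(f"check did not succeed within {retries} attempts")

# Example: a fake health check that succeeds on the third attempt
state = {"calls": 0}
def fake_health_check():
    state["calls"] += 1
    return state["calls"] >= 3

wait_until(fake_health_check, retries=10, delay=0.01)
```

The fixed delay matches the log's behavior; a production variant would usually add exponential backoff and an overall deadline.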
2025-07-06 20:22:15.650635 | orchestrator | ok: [testbed-manager]
2025-07-06 20:22:15.650659 | orchestrator |
2025-07-06 20:22:15.650714 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2025-07-06 20:22:15.650736 | orchestrator | Sunday 06 July 2025 20:20:33 +0000 (0:00:36.616) 0:00:40.790 ***********
2025-07-06 20:22:15.650755 | orchestrator | changed: [testbed-manager] => (item=ceph)
2025-07-06 20:22:15.650773 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2025-07-06 20:22:15.650785 | orchestrator | changed: [testbed-manager] => (item=rados)
2025-07-06 20:22:15.650796 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2025-07-06 20:22:15.650820 | orchestrator | changed: [testbed-manager] => (item=rbd)
2025-07-06 20:22:15.650830 | orchestrator |
2025-07-06 20:22:15.650841 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2025-07-06 20:22:15.650852 | orchestrator | Sunday 06 July 2025 20:20:37 +0000 (0:00:03.660) 0:00:44.450 ***********
2025-07-06 20:22:15.650862 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2025-07-06 20:22:15.650873 | orchestrator |
2025-07-06 20:22:15.650884 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2025-07-06 20:22:15.650894 | orchestrator | Sunday 06 July 2025 20:20:37 +0000 (0:00:00.389) 0:00:44.840 ***********
2025-07-06 20:22:15.650905 | orchestrator | skipping: [testbed-manager]
2025-07-06 20:22:15.650916 | orchestrator |
2025-07-06 20:22:15.650926 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2025-07-06 20:22:15.650937 | orchestrator | Sunday 06 July 2025 20:20:37 +0000 (0:00:00.112) 0:00:44.953 ***********
2025-07-06 20:22:15.650948 | orchestrator | skipping: [testbed-manager]
2025-07-06 20:22:15.650958 | orchestrator |
2025-07-06 20:22:15.650969 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2025-07-06 20:22:15.650980 | orchestrator | Sunday 06 July 2025 20:20:38 +0000 (0:00:00.270) 0:00:45.223 ***********
2025-07-06 20:22:15.650990 | orchestrator | changed: [testbed-manager]
2025-07-06 20:22:15.651001 | orchestrator |
2025-07-06 20:22:15.651011 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2025-07-06 20:22:15.651022 | orchestrator | Sunday 06 July 2025 20:20:39 +0000 (0:00:01.594) 0:00:46.817 ***********
2025-07-06 20:22:15.651033 | orchestrator | changed: [testbed-manager]
2025-07-06 20:22:15.651043 | orchestrator |
2025-07-06 20:22:15.651128 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2025-07-06 20:22:15.651141 | orchestrator | Sunday 06 July 2025 20:20:40 +0000 (0:00:00.656) 0:00:47.474 ***********
2025-07-06 20:22:15.651152 | orchestrator | changed: [testbed-manager]
2025-07-06 20:22:15.651163 | orchestrator |
2025-07-06 20:22:15.651174 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2025-07-06 20:22:15.651195 | orchestrator | Sunday 06 July 2025 20:20:40 +0000 (0:00:00.516) 0:00:47.991 ***********
2025-07-06 20:22:15.651229 | orchestrator | ok: [testbed-manager] => (item=ceph)
2025-07-06 20:22:15.651248 | orchestrator | ok: [testbed-manager] => (item=rados)
2025-07-06 20:22:15.651265 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2025-07-06 20:22:15.651284 | orchestrator | ok: [testbed-manager] => (item=rbd)
2025-07-06 20:22:15.651301 | orchestrator |
2025-07-06 20:22:15.651312 | orchestrator | PLAY RECAP *********************************************************************
2025-07-06 20:22:15.651324 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-06 20:22:15.651336 | orchestrator |
2025-07-06 20:22:15.651346 | orchestrator |
2025-07-06 20:22:15.651374 | orchestrator | TASKS RECAP ********************************************************************
2025-07-06 20:22:15.651386 | orchestrator | Sunday 06 July 2025 20:20:42 +0000 (0:00:01.260) 0:00:49.251 ***********
2025-07-06 20:22:15.651405 | orchestrator | ===============================================================================
2025-07-06 20:22:15.651416 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 36.62s
2025-07-06 20:22:15.651427 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.66s
2025-07-06 20:22:15.651437 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.59s
2025-07-06 20:22:15.651448 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.26s
2025-07-06 20:22:15.651459 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.10s
2025-07-06 20:22:15.651469 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.03s
2025-07-06 20:22:15.651480 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.85s
2025-07-06 20:22:15.651490 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.77s
2025-07-06 20:22:15.651501 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.66s
2025-07-06 20:22:15.651512 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.52s
2025-07-06 20:22:15.651522 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.39s
2025-07-06 20:22:15.651533 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.27s
2025-07-06 20:22:15.651543 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.21s
2025-07-06 20:22:15.651554 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.11s
2025-07-06 20:22:15.651564 | orchestrator |
2025-07-06 20:22:15.651575 | orchestrator |
2025-07-06 20:22:15.651586 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-06 20:22:15.651596 | orchestrator |
2025-07-06 20:22:15.651607 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-06 20:22:15.651617 | orchestrator | Sunday 06 July 2025 20:20:47 +0000 (0:00:00.147) 0:00:00.147 ***********
2025-07-06 20:22:15.651628 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:22:15.651639 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:22:15.651650 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:22:15.651660 | orchestrator |
2025-07-06 20:22:15.651671 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-06 20:22:15.651682 | orchestrator | Sunday 06 July 2025 20:20:47 +0000 (0:00:00.245) 0:00:00.392 ***********
2025-07-06 20:22:15.651692 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-07-06 20:22:15.651703 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-07-06 20:22:15.651714 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-07-06 20:22:15.651724 | orchestrator |
2025-07-06 20:22:15.651735 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2025-07-06 20:22:15.651745 | orchestrator |
2025-07-06 20:22:15.651756 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2025-07-06 20:22:15.651767 | orchestrator | Sunday 06 July 2025 20:20:48 +0000 (0:00:00.720) 0:00:01.112 ***********
2025-07-06 20:22:15.651785 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:22:15.651796 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:22:15.651807 | orchestrator | ok:
[testbed-node-0]
2025-07-06 20:22:15.651817 | orchestrator |
2025-07-06 20:22:15.651828 | orchestrator | PLAY RECAP *********************************************************************
2025-07-06 20:22:15.651840 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 20:22:15.651852 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 20:22:15.651863 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 20:22:15.651873 | orchestrator |
2025-07-06 20:22:15.651884 | orchestrator |
2025-07-06 20:22:15.651895 | orchestrator | TASKS RECAP ********************************************************************
2025-07-06 20:22:15.651905 | orchestrator | Sunday 06 July 2025 20:20:49 +0000 (0:00:00.923) 0:00:02.036 ***********
2025-07-06 20:22:15.651916 | orchestrator | ===============================================================================
2025-07-06 20:22:15.651926 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.92s
2025-07-06 20:22:15.651937 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.72s
2025-07-06 20:22:15.651948 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.25s
2025-07-06 20:22:15.651958 | orchestrator |
2025-07-06 20:22:15.651969 | orchestrator |
2025-07-06 20:22:15.651979 | orchestrator | PLAY [Bootstraph ceph dashboard] ***********************************************
2025-07-06 20:22:15.651990 | orchestrator |
2025-07-06 20:22:15.652001 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2025-07-06 20:22:15.652011 | orchestrator | Sunday 06 July 2025 20:20:46 +0000 (0:00:00.293) 0:00:00.293 ***********
2025-07-06 20:22:15.652022 | orchestrator | changed: [testbed-manager]
2025-07-06 20:22:15.652033 | orchestrator |
2025-07-06 20:22:15.652043 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2025-07-06 20:22:15.652054 | orchestrator | Sunday 06 July 2025 20:20:48 +0000 (0:00:02.137) 0:00:02.431 ***********
2025-07-06 20:22:15.652064 | orchestrator | changed: [testbed-manager]
2025-07-06 20:22:15.652075 | orchestrator |
2025-07-06 20:22:15.652086 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2025-07-06 20:22:15.652127 | orchestrator | Sunday 06 July 2025 20:20:49 +0000 (0:00:00.890) 0:00:03.322 ***********
2025-07-06 20:22:15.652149 | orchestrator | changed: [testbed-manager]
2025-07-06 20:22:15.652168 | orchestrator |
2025-07-06 20:22:15.652188 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2025-07-06 20:22:15.652208 | orchestrator | Sunday 06 July 2025 20:20:50 +0000 (0:00:00.837) 0:00:04.159 ***********
2025-07-06 20:22:15.652219 | orchestrator | changed: [testbed-manager]
2025-07-06 20:22:15.652229 | orchestrator |
2025-07-06 20:22:15.652246 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2025-07-06 20:22:15.652257 | orchestrator | Sunday 06 July 2025 20:20:51 +0000 (0:00:00.991) 0:00:05.151 ***********
2025-07-06 20:22:15.652268 | orchestrator | changed: [testbed-manager]
2025-07-06 20:22:15.652278 | orchestrator |
2025-07-06 20:22:15.652289 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2025-07-06 20:22:15.652300 | orchestrator | Sunday 06 July 2025 20:20:52 +0000 (0:00:01.060) 0:00:06.211 ***********
2025-07-06 20:22:15.652310 | orchestrator | changed: [testbed-manager]
2025-07-06 20:22:15.652321 | orchestrator |
2025-07-06 20:22:15.652331 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2025-07-06 20:22:15.652342 | orchestrator | Sunday 06 July 2025 20:20:52 +0000 (0:00:00.872) 0:00:07.084 ***********
2025-07-06 20:22:15.652352 | orchestrator | changed: [testbed-manager]
2025-07-06 20:22:15.652363 | orchestrator |
2025-07-06 20:22:15.652373 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2025-07-06 20:22:15.652391 | orchestrator | Sunday 06 July 2025 20:20:54 +0000 (0:00:01.198) 0:00:08.282 ***********
2025-07-06 20:22:15.652402 | orchestrator | changed: [testbed-manager]
2025-07-06 20:22:15.652413 | orchestrator |
2025-07-06 20:22:15.652424 | orchestrator | TASK [Create admin user] *******************************************************
2025-07-06 20:22:15.652434 | orchestrator | Sunday 06 July 2025 20:20:55 +0000 (0:00:00.990) 0:00:09.273 ***********
2025-07-06 20:22:15.652445 | orchestrator | changed: [testbed-manager]
2025-07-06 20:22:15.652456 | orchestrator |
2025-07-06 20:22:15.652466 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2025-07-06 20:22:15.652477 | orchestrator | Sunday 06 July 2025 20:21:48 +0000 (0:00:53.284) 0:01:02.557 ***********
2025-07-06 20:22:15.652487 | orchestrator | skipping: [testbed-manager]
2025-07-06 20:22:15.652498 | orchestrator |
2025-07-06 20:22:15.652509 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-07-06 20:22:15.652519 | orchestrator |
2025-07-06 20:22:15.652530 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-07-06 20:22:15.652540 | orchestrator | Sunday 06 July 2025 20:21:48 +0000 (0:00:00.147) 0:01:02.705 ***********
2025-07-06 20:22:15.652551 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:22:15.652561 | orchestrator |
2025-07-06 20:22:15.652572 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-07-06 20:22:15.652583 | orchestrator |
2025-07-06 20:22:15.652593 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-07-06 20:22:15.652604 | orchestrator | Sunday 06 July 2025 20:21:50 +0000 (0:00:01.494) 0:01:04.199 ***********
2025-07-06 20:22:15.652615 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:22:15.652625 | orchestrator |
2025-07-06 20:22:15.652636 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-07-06 20:22:15.652647 | orchestrator |
2025-07-06 20:22:15.652657 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-07-06 20:22:15.652668 | orchestrator | Sunday 06 July 2025 20:22:01 +0000 (0:00:11.323) 0:01:15.522 ***********
2025-07-06 20:22:15.652678 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:22:15.652689 | orchestrator |
2025-07-06 20:22:15.652699 | orchestrator | PLAY RECAP *********************************************************************
2025-07-06 20:22:15.652710 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-06 20:22:15.652721 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 20:22:15.652732 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 20:22:15.652743 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 20:22:15.652754 | orchestrator |
2025-07-06 20:22:15.652764 | orchestrator |
2025-07-06 20:22:15.652775 | orchestrator |
2025-07-06 20:22:15.652785 | orchestrator | TASKS RECAP ********************************************************************
2025-07-06 20:22:15.652796 | orchestrator | Sunday 06 July 2025 20:22:12 +0000 (0:00:11.114) 0:01:26.637 ***********
2025-07-06 20:22:15.652807 | orchestrator | ===============================================================================
2025-07-06 20:22:15.652817 | orchestrator | Create admin user ------------------------------------------------------ 53.28s
2025-07-06 20:22:15.652828 | orchestrator | Restart ceph manager service ------------------------------------------- 23.93s
2025-07-06 20:22:15.652838 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.14s
2025-07-06 20:22:15.652849 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.20s
2025-07-06 20:22:15.652859 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.06s
2025-07-06 20:22:15.652870 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 0.99s
2025-07-06 20:22:15.652887 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 0.99s
2025-07-06 20:22:15.652897 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.89s
2025-07-06 20:22:15.652908 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.87s
2025-07-06 20:22:15.652918 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.84s
2025-07-06 20:22:15.652929 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.15s
2025-07-06 20:22:15.653058 | orchestrator | 2025-07-06 20:22:15 | INFO  | Task 4b37e11b-e68c-4889-9640-2f8424eb8826 is in state STARTED
2025-07-06 20:22:15.653073 | orchestrator | 2025-07-06 20:22:15 | INFO  | Task 17260003-634b-4cce-b77d-180c69221572 is in state STARTED
2025-07-06 20:22:15.653166 | orchestrator | 2025-07-06 20:22:15 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:22:18.681392 | orchestrator | 2025-07-06 20:22:18 | INFO  | Task e1c274c6-39cf-4eef-a6ae-af863a58c3ba is in state STARTED
2025-07-06 20:22:18.681486 | orchestrator | 2025-07-06 20:22:18 | INFO  | Task d81dc606-9df8-4696-aa10-20f01755f5b3 is in state STARTED
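The ceph dashboard play above sets module config keys, re-enables the module, and creates the admin user from a temporary password file. A sketch of the roughly equivalent manual Ceph CLI sequence (assumptions: a recent Ceph release where these config keys exist, and the `/tmp` password-file path and `CEPH_DASHBOARD_PASSWORD` variable are illustrative):

```shell
# Disable the dashboard module while reconfiguring it
ceph mgr module disable dashboard

# Serve plain HTTP on port 7000 on all addresses
ceph config set mgr mgr/dashboard/ssl false
ceph config set mgr mgr/dashboard/server_port 7000
ceph config set mgr mgr/dashboard/server_addr 0.0.0.0

# Standby mgrs return HTTP 404 instead of redirecting to the active mgr
ceph config set mgr mgr/dashboard/standby_behaviour error
ceph config set mgr mgr/dashboard/standby_error_status_code 404

# Re-enable the module and create the admin user from a password file
ceph mgr module enable dashboard
printf '%s' "$CEPH_DASHBOARD_PASSWORD" > /tmp/ceph_dashboard_password
ceph dashboard ac-user-create admin -i /tmp/ceph_dashboard_password administrator
rm -f /tmp/ceph_dashboard_password
```

The subsequent "Restart ceph manager service" plays bounce each mgr daemon so the reconfigured dashboard is picked up on all three nodes.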
2025-07-06 20:22:18.681501 | orchestrator | 2025-07-06 20:22:18 | INFO  | Task 4b37e11b-e68c-4889-9640-2f8424eb8826 is in state STARTED
2025-07-06 20:22:18.682261 | orchestrator | 2025-07-06 20:22:18 | INFO  | Task 17260003-634b-4cce-b77d-180c69221572 is in state STARTED
2025-07-06 20:22:18.682291 | orchestrator | 2025-07-06 20:22:18 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:22:45.983375 | orchestrator | 2025-07-06 20:22:45 | INFO  | Task e1c274c6-39cf-4eef-a6ae-af863a58c3ba is in state STARTED
2025-07-06 20:22:45.984943 | orchestrator | 2025-07-06 20:22:45 | INFO  | Task d81dc606-9df8-4696-aa10-20f01755f5b3 is in state STARTED
2025-07-06 20:22:45.987540 | orchestrator | 2025-07-06 20:22:45 | INFO  | Task
4b37e11b-e68c-4889-9640-2f8424eb8826 is in state STARTED
2025-07-06 20:22:45.988028 | orchestrator | 2025-07-06 20:22:45 | INFO  | Task 17260003-634b-4cce-b77d-180c69221572 is in state STARTED
2025-07-06 20:22:45.988080 | orchestrator | 2025-07-06 20:22:45 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:22:49.023200 | orchestrator |
2025-07-06 20:22:49.023313 | orchestrator |
2025-07-06 20:22:49.023331 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-06 20:22:49.023344 | orchestrator |
2025-07-06 20:22:49.023355 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-06 20:22:49.023367 | orchestrator | Sunday 06 July 2025 20:20:47 +0000 (0:00:00.234) 0:00:00.234 ***********
2025-07-06 20:22:49.023378 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:22:49.023415 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:22:49.023428 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:22:49.023439 | orchestrator |
2025-07-06 20:22:49.023450 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-06 20:22:49.023461 | orchestrator | Sunday 06 July 2025 20:20:47 +0000 (0:00:00.255) 0:00:00.490 ***********
2025-07-06 20:22:49.023472 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2025-07-06 20:22:49.023484 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2025-07-06 20:22:49.023494 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2025-07-06 20:22:49.023505 | orchestrator |
2025-07-06 20:22:49.023516 | orchestrator | PLAY [Apply role barbican] *****************************************************
2025-07-06 20:22:49.023527 | orchestrator |
2025-07-06 20:22:49.023538 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-07-06 20:22:49.023549 | orchestrator | Sunday 06 July 2025 20:20:48 +0000 (0:00:00.465) 0:00:00.955 ***********
2025-07-06 20:22:49.023561 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-06 20:22:49.023572 | orchestrator |
2025-07-06 20:22:49.023583 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2025-07-06 20:22:49.023594 | orchestrator | Sunday 06 July 2025 20:20:48 +0000 (0:00:00.494) 0:00:01.449 ***********
2025-07-06 20:22:49.023605 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2025-07-06 20:22:49.023616 | orchestrator |
2025-07-06 20:22:49.023627 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2025-07-06 20:22:49.023638 | orchestrator | Sunday 06 July 2025 20:20:52 +0000 (0:00:03.958) 0:00:05.408 ***********
2025-07-06 20:22:49.023651 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2025-07-06 20:22:49.023664 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2025-07-06 20:22:49.023677 | orchestrator |
2025-07-06 20:22:49.023689 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2025-07-06 20:22:49.023702 | orchestrator | Sunday 06 July 2025 20:20:59 +0000 (0:00:06.468) 0:00:11.876 ***********
2025-07-06 20:22:49.023714 | orchestrator | changed: [testbed-node-0] => (item=service)
2025-07-06 20:22:49.023728 | orchestrator |
2025-07-06 20:22:49.023740 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2025-07-06 20:22:49.023753 | orchestrator | Sunday 06 July 2025 20:21:02 +0000 (0:00:03.184) 0:00:15.061 ***********
2025-07-06 20:22:49.023766 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-07-06 20:22:49.023779 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2025-07-06 20:22:49.023791 | orchestrator | 2025-07-06 20:22:49.023804 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-07-06 20:22:49.023817 | orchestrator | Sunday 06 July 2025 20:21:06 +0000 (0:00:03.819) 0:00:18.881 *********** 2025-07-06 20:22:49.023845 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-07-06 20:22:49.023860 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-07-06 20:22:49.023873 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-07-06 20:22:49.023886 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-07-06 20:22:49.023899 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-07-06 20:22:49.023911 | orchestrator | 2025-07-06 20:22:49.023924 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-07-06 20:22:49.023962 | orchestrator | Sunday 06 July 2025 20:21:20 +0000 (0:00:14.582) 0:00:33.463 *********** 2025-07-06 20:22:49.023976 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-07-06 20:22:49.023989 | orchestrator | 2025-07-06 20:22:49.024001 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-07-06 20:22:49.024012 | orchestrator | Sunday 06 July 2025 20:21:25 +0000 (0:00:04.401) 0:00:37.865 *********** 2025-07-06 20:22:49.024027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-06 20:22:49.024061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-06 20:22:49.024074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': 
{'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-06 20:22:49.024087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-06 20:22:49.024142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-06 20:22:49.024155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-06 20:22:49.024174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:22:49.024187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:22:49.024199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:22:49.024210 | orchestrator | 2025-07-06 20:22:49.024222 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-07-06 20:22:49.024233 | orchestrator | Sunday 06 July 2025 20:21:27 +0000 (0:00:02.350) 0:00:40.215 *********** 2025-07-06 20:22:49.024244 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-07-06 20:22:49.024255 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-07-06 20:22:49.024265 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-07-06 20:22:49.024276 | orchestrator | 2025-07-06 20:22:49.024287 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-07-06 20:22:49.024297 | orchestrator | Sunday 06 July 2025 20:21:28 +0000 (0:00:01.038) 0:00:41.253 *********** 2025-07-06 20:22:49.024308 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:22:49.024319 | orchestrator | 2025-07-06 20:22:49.024337 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-07-06 20:22:49.024348 | orchestrator | Sunday 06 July 2025 20:21:28 +0000 (0:00:00.123) 0:00:41.377 *********** 2025-07-06 20:22:49.024358 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:22:49.024369 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:22:49.024380 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:22:49.024391 | orchestrator | 2025-07-06 20:22:49.024402 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-07-06 20:22:49.024412 | orchestrator | Sunday 06 July 2025 20:21:29 +0000 (0:00:00.591) 0:00:41.969 *********** 2025-07-06 
20:22:49.024429 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:22:49.024440 | orchestrator | 2025-07-06 20:22:49.024451 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-07-06 20:22:49.024462 | orchestrator | Sunday 06 July 2025 20:21:30 +0000 (0:00:01.442) 0:00:43.411 *********** 2025-07-06 20:22:49.024473 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-06 20:22:49.024493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-06 20:22:49.024505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-06 20:22:49.024517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 
'timeout': '30'}}}) 2025-07-06 20:22:49.024540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-06 20:22:49.024552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-06 20:22:49.024563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 
2025-07-06 20:22:49.024582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:22:49.024594 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:22:49.024605 | orchestrator | 2025-07-06 20:22:49.024616 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-07-06 20:22:49.024627 | orchestrator | Sunday 06 July 2025 20:21:35 +0000 (0:00:04.679) 0:00:48.090 *********** 2025-07-06 20:22:49.024639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-06 20:22:49.024661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-06 20:22:49.024674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:22:49.024685 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:22:49.024703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-06 20:22:49.024715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-06 20:22:49.024726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:22:49.024775 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:22:49.024788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-06 20:22:49.024807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-06 20:22:49.024827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 
'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:22:49.024847 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:22:49.024866 | orchestrator | 2025-07-06 20:22:49.024886 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-07-06 20:22:49.024904 | orchestrator | Sunday 06 July 2025 20:21:36 +0000 (0:00:01.007) 0:00:49.098 *********** 2025-07-06 20:22:49.024935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-06 20:22:49.024956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-06 20:22:49.024997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:22:49.025019 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:22:49.025037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-06 20:22:49.025049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-06 20:22:49.025061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:22:49.025072 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:22:49.025091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-06 20:22:49.025151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-06 20:22:49.025163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:22:49.025174 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:22:49.025185 | orchestrator | 2025-07-06 20:22:49.025200 | orchestrator | TASK [barbican : Copying over 
config.json files for services] ****************** 2025-07-06 20:22:49.025220 | orchestrator | Sunday 06 July 2025 20:21:37 +0000 (0:00:01.358) 0:00:50.456 *********** 2025-07-06 20:22:49.025258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-06 20:22:49.025291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
2025-07-06 20:22:49 | INFO  | Task e1c274c6-39cf-4eef-a6ae-af863a58c3ba is in state SUCCESS 2025-07-06 20:22:49.025788 | orchestrator | 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-06 20:22:49.025834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-06 20:22:49.025872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-06 20:22:49.025904 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-06 20:22:49.025925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-06 20:22:49.025944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:22:49.025977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 
'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:22:49.026092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:22:49.026216 | orchestrator | 2025-07-06 20:22:49.026235 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-07-06 20:22:49.026253 | orchestrator | Sunday 06 July 2025 20:21:41 +0000 (0:00:03.301) 0:00:53.757 *********** 2025-07-06 20:22:49.026270 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:22:49.026288 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:22:49.026307 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:22:49.026326 | orchestrator | 2025-07-06 20:22:49.026344 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-07-06 20:22:49.026362 | orchestrator | Sunday 06 July 2025 20:21:44 +0000 (0:00:02.962) 0:00:56.720 *********** 2025-07-06 20:22:49.026383 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-06 
20:22:49.026403 | orchestrator | 2025-07-06 20:22:49.026422 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-07-06 20:22:49.026442 | orchestrator | Sunday 06 July 2025 20:21:46 +0000 (0:00:02.083) 0:00:58.804 *********** 2025-07-06 20:22:49.026462 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:22:49.026478 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:22:49.026492 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:22:49.026504 | orchestrator | 2025-07-06 20:22:49.026517 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-07-06 20:22:49.026529 | orchestrator | Sunday 06 July 2025 20:21:47 +0000 (0:00:01.046) 0:00:59.851 *********** 2025-07-06 20:22:49.026553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-06 20:22:49.026568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-06 20:22:49.026607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-06 20:22:49.026621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-06 20:22:49.026633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-06 20:22:49.026644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-06 20:22:49.026660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 
'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:22:49.026672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:22:49.026690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:22:49.026701 | orchestrator | 2025-07-06 20:22:49.026712 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-07-06 20:22:49.026729 | orchestrator | Sunday 06 July 2025 20:21:58 +0000 (0:00:10.731) 0:01:10.582 *********** 2025-07-06 20:22:49.026740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-06 20:22:49.026752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-06 20:22:49.026763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:22:49.026774 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:22:49.026790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-06 20:22:49.026809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-06 20:22:49.026828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': 
{'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:22:49.026840 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:22:49.026851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-06 20:22:49.026863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-06 20:22:49.026879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:22:49.026890 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:22:49.026901 | orchestrator | 2025-07-06 20:22:49.026912 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-07-06 20:22:49.026923 | orchestrator | Sunday 06 July 2025 20:21:58 +0000 (0:00:00.619) 0:01:11.202 *********** 2025-07-06 20:22:49.026941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 
'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-06 20:22:49.026960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-06 20:22:49.026973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-06 20:22:49.026984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-06 20:22:49.027005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-06 20:22:49.027023 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-06 20:22:49.027034 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:22:49.027052 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:22:49.027064 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 
'timeout': '30'}}})
2025-07-06 20:22:49.027075 | orchestrator |
2025-07-06 20:22:49.027086 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-07-06 20:22:49.027120 | orchestrator | Sunday 06 July 2025 20:22:02 +0000 (0:00:03.534) 0:01:14.737 ***********
2025-07-06 20:22:49.027131 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:22:49.027142 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:22:49.027153 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:22:49.027164 | orchestrator |
2025-07-06 20:22:49.027175 | orchestrator | TASK [barbican : Creating barbican database] ***********************************
2025-07-06 20:22:49.027185 | orchestrator | Sunday 06 July 2025 20:22:02 +0000 (0:00:00.282) 0:01:15.019 ***********
2025-07-06 20:22:49.027196 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:22:49.027207 | orchestrator |
2025-07-06 20:22:49.027217 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ******
2025-07-06 20:22:49.027228 | orchestrator | Sunday 06 July 2025 20:22:04 +0000 (0:00:02.277) 0:01:17.297 ***********
2025-07-06 20:22:49.027239 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:22:49.027250 | orchestrator |
2025-07-06 20:22:49.027261 | orchestrator | TASK [barbican : Running barbican bootstrap container] *************************
2025-07-06 20:22:49.027271 | orchestrator | Sunday 06 July 2025 20:22:07 +0000 (0:00:02.484) 0:01:19.781 ***********
2025-07-06 20:22:49.027282 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:22:49.027293 | orchestrator |
2025-07-06 20:22:49.027310 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-07-06 20:22:49.027321 | orchestrator | Sunday 06 July 2025 20:22:18 +0000 (0:00:11.419) 0:01:31.201 ***********
2025-07-06 20:22:49.027332 | orchestrator |
2025-07-06 20:22:49.027342 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-07-06 20:22:49.027353 | orchestrator | Sunday 06 July 2025 20:22:18 +0000 (0:00:00.263) 0:01:31.464 ***********
2025-07-06 20:22:49.027364 | orchestrator |
2025-07-06 20:22:49.027380 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-07-06 20:22:49.027391 | orchestrator | Sunday 06 July 2025 20:22:19 +0000 (0:00:00.112) 0:01:31.577 ***********
2025-07-06 20:22:49.027402 | orchestrator |
2025-07-06 20:22:49.027413 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2025-07-06 20:22:49.027423 | orchestrator | Sunday 06 July 2025 20:22:19 +0000 (0:00:00.111) 0:01:31.689 ***********
2025-07-06 20:22:49.027434 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:22:49.027445 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:22:49.027455 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:22:49.027466 | orchestrator |
2025-07-06 20:22:49.027477 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2025-07-06 20:22:49.027488 | orchestrator | Sunday 06 July 2025 20:22:26 +0000 (0:00:07.794) 0:01:39.483 ***********
2025-07-06 20:22:49.027498 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:22:49.027509 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:22:49.027520 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:22:49.027530 | orchestrator |
2025-07-06 20:22:49.027541 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2025-07-06 20:22:49.027552 | orchestrator | Sunday 06 July 2025 20:22:37 +0000 (0:00:10.751) 0:01:50.235 ***********
2025-07-06 20:22:49.027563 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:22:49.027573 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:22:49.027584 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:22:49.027595 | orchestrator |
2025-07-06 20:22:49.027605 | orchestrator | PLAY RECAP *********************************************************************
2025-07-06 20:22:49.027617 | orchestrator | testbed-node-0 : ok=24  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-07-06 20:22:49.027628 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-06 20:22:49.027639 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-06 20:22:49.027650 | orchestrator |
2025-07-06 20:22:49.027660 | orchestrator |
2025-07-06 20:22:49.027671 | orchestrator | TASKS RECAP ********************************************************************
2025-07-06 20:22:49.027682 | orchestrator | Sunday 06 July 2025 20:22:45 +0000 (0:00:07.728) 0:01:57.964 ***********
2025-07-06 20:22:49.027692 | orchestrator | ===============================================================================
2025-07-06 20:22:49.027703 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 14.58s
2025-07-06 20:22:49.027720 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.42s
2025-07-06 20:22:49.027731 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 10.75s
2025-07-06 20:22:49.027742 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 10.73s
2025-07-06 20:22:49.027752 | orchestrator | barbican : Restart barbican-api container ------------------------------- 7.79s
2025-07-06 20:22:49.027763 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 7.73s
2025-07-06 20:22:49.027773 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.47s
2025-07-06 20:22:49.027784 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 4.68s
2025-07-06 20:22:49.027795 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.40s
2025-07-06 20:22:49.027811 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.96s
2025-07-06 20:22:49.027822 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.82s
2025-07-06 20:22:49.027833 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.53s
2025-07-06 20:22:49.027844 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.30s
2025-07-06 20:22:49.027854 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.18s
2025-07-06 20:22:49.027865 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.96s
2025-07-06 20:22:49.027876 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.48s
2025-07-06 20:22:49.027886 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.35s
2025-07-06 20:22:49.027897 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.28s
2025-07-06 20:22:49.027908 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 2.08s
2025-07-06 20:22:49.027918 | orchestrator | barbican : include_tasks ------------------------------------------------ 1.44s
2025-07-06 20:22:49.027929 | orchestrator | 2025-07-06 20:22:49 | INFO  | Task d81dc606-9df8-4696-aa10-20f01755f5b3 is in state STARTED
2025-07-06 20:22:49.027940 | orchestrator | 2025-07-06 20:22:49 | INFO  | Task 6b315b10-b92d-4e09-8107-b73fa5016883 is in state STARTED
2025-07-06 20:22:49.027951 | orchestrator | 2025-07-06 20:22:49 | INFO  | Task 4b37e11b-e68c-4889-9640-2f8424eb8826 is in state STARTED
2025-07-06 20:22:49.027962 | orchestrator | 2025-07-06 20:22:49 | INFO  | Task 17260003-634b-4cce-b77d-180c69221572 is in state
STARTED 2025-07-06 20:22:49.027973 | orchestrator | 2025-07-06 20:22:49 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:23:59.060852 | orchestrator | 2025-07-06 20:23:59 | INFO  | Task d81dc606-9df8-4696-aa10-20f01755f5b3 is in state STARTED
2025-07-06 20:23:59.061043 | orchestrator | 2025-07-06 20:23:59 | INFO  | Task 7ed6bfb2-0ab7-45ef-9c8c-b38b42243d34 is in state STARTED
2025-07-06 20:23:59.061768 | orchestrator | 2025-07-06 20:23:59 | INFO  | Task 6b315b10-b92d-4e09-8107-b73fa5016883 is in state STARTED
2025-07-06 20:23:59.062492 | orchestrator | 2025-07-06 20:23:59 | INFO  | Task 4b37e11b-e68c-4889-9640-2f8424eb8826 is in state STARTED
2025-07-06 20:23:59.065423 | orchestrator |
2025-07-06 20:23:59.066157 | orchestrator | 2025-07-06 20:23:59 | INFO  | Task 17260003-634b-4cce-b77d-180c69221572 is in state SUCCESS
2025-07-06 20:23:59.066545 | orchestrator |
2025-07-06 20:23:59.066619 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-06 20:23:59.066635 | orchestrator |
2025-07-06 20:23:59.066647 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-06 20:23:59.066659 | orchestrator | Sunday 06 July 2025 20:20:53 +0000 (0:00:00.277) 0:00:00.277 ***********
2025-07-06 20:23:59.066670 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:23:59.066682 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:23:59.066693 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:23:59.066704 | orchestrator |
2025-07-06 20:23:59.066715 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-06 20:23:59.066725 | orchestrator | Sunday 06 July 2025 20:20:53 +0000 (0:00:00.287) 0:00:00.564 ***********
2025-07-06 20:23:59.066737 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2025-07-06 20:23:59.066748 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2025-07-06 20:23:59.066759 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2025-07-06 20:23:59.066769 | orchestrator |
2025-07-06 20:23:59.066780 | orchestrator | PLAY [Apply role designate] ****************************************************
2025-07-06 20:23:59.066791 | orchestrator |
2025-07-06 20:23:59.066801 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-07-06 20:23:59.066812 | orchestrator | Sunday 06 July 2025 20:20:54 +0000 (0:00:00.355) 0:00:00.920 ***********
2025-07-06 20:23:59.066823 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-06 20:23:59.066834 | orchestrator |
2025-07-06 20:23:59.066845 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2025-07-06 20:23:59.066856 | orchestrator | Sunday 06 July 2025 20:20:54 +0000 (0:00:00.493) 0:00:01.413 ***********
2025-07-06 20:23:59.066892 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2025-07-06 20:23:59.066904 | orchestrator |
2025-07-06 20:23:59.066931 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2025-07-06 20:23:59.066943 | orchestrator | Sunday 06 July 2025 20:20:58 +0000 (0:00:03.863) 0:00:05.277 ***********
2025-07-06 20:23:59.066953 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2025-07-06 20:23:59.066965 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2025-07-06 20:23:59.066976 | orchestrator |
2025-07-06 20:23:59.066986 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2025-07-06 20:23:59.066997 | orchestrator | Sunday 06 July 2025 20:21:04 +0000 (0:00:06.120) 0:00:11.397 ***********
2025-07-06 20:23:59.067008 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-07-06 20:23:59.067019 | orchestrator |
2025-07-06 20:23:59.067029 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2025-07-06 20:23:59.067040 | orchestrator | Sunday 06 July 2025 20:21:07 +0000 (0:00:03.336) 0:00:14.734 ***********
2025-07-06 20:23:59.067051 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-07-06 20:23:59.067062 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2025-07-06 20:23:59.067074 | orchestrator |
2025-07-06 20:23:59.067088 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2025-07-06 20:23:59.067100 | orchestrator | Sunday 06 July 2025 20:21:11 +0000 (0:00:03.822) 0:00:18.556 ***********
2025-07-06 20:23:59.067112 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-07-06 20:23:59.067152 | orchestrator |
2025-07-06 20:23:59.067166 | orchestrator | TASK [service-ks-register : designate | Granting user roles] *******************
2025-07-06 20:23:59.067180 | orchestrator | Sunday 06 July 2025 20:21:14 +0000 (0:00:03.093) 0:00:21.650 ***********
2025-07-06 20:23:59.067193 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin)
2025-07-06 20:23:59.067205 | orchestrator |
2025-07-06 20:23:59.067218 | orchestrator | TASK [designate : Ensuring config directories exist] ***************************
2025-07-06 20:23:59.067232 | orchestrator | Sunday 06 July 2025 20:21:18 +0000 (0:00:03.604) 0:00:25.254 ***********
2025-07-06 20:23:59.067276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-06 20:23:59.067324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-06 20:23:59.067360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-06 20:23:59.067381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-06 20:23:59.067395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-06 20:23:59.067410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:59.067424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:59.067443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}})
2025-07-06 20:23:59.067463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-07-06 20:23:59.067476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-07-06 20:23:59.067531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-07-06 20:23:59.067552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-07-06 20:23:59.067572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-07-06 20:23:59.067597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-07-06 20:23:59.067626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-06 20:23:59.067656 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-06 20:23:59.067677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-07-06 20:23:59.067696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-06 20:23:59.067716 | orchestrator |
2025-07-06 20:23:59.067727 | orchestrator | TASK [designate : Check if policies shall be overwritten] **********************
2025-07-06 20:23:59.067739 | orchestrator | Sunday 06 July 2025 20:21:22 +0000 (0:00:03.766) 0:00:29.021 ***********
2025-07-06 20:23:59.067750 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:23:59.067761 | orchestrator |
2025-07-06 20:23:59.067771 | orchestrator | TASK [designate : Set designate policy file] ***********************************
2025-07-06 20:23:59.067782 | orchestrator | Sunday 06 July 2025 20:21:22 +0000 (0:00:00.123) 0:00:29.145 ***********
2025-07-06 20:23:59.067792 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:23:59.067803 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:23:59.067814 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:23:59.067824 | orchestrator |
2025-07-06 20:23:59.067835 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-07-06 20:23:59.067846 | orchestrator | Sunday 06 July 2025 20:21:22 +0000 (0:00:00.230) 0:00:29.376 ***********
2025-07-06 20:23:59.067856 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-06 20:23:59.067868 | orchestrator |
2025-07-06 20:23:59.067878 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ******
2025-07-06 20:23:59.067889 | orchestrator | Sunday 06 July 2025 20:21:23 +0000 (0:00:00.471) 0:00:29.847 ***********
2025-07-06 20:23:59.067906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-06 20:23:59.067933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-06 20:23:59.067953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-06 20:23:59.067981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-07-06 20:23:59.068001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-07-06 20:23:59.068026 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-07-06 20:23:59.068058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-07-06 20:23:59.068089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-07-06 20:23:59.068110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-07-06 20:23:59.068172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-07-06 20:23:59.068186 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-07-06 20:23:59.068198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-07-06 20:23:59.068215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-07-06 20:23:59.068255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-07-06 20:23:59.068267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-07-06 20:23:59.068278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-06 20:23:59.068290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-06 20:23:59.068301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-06 20:23:59.068312 | orchestrator |
2025-07-06 20:23:59.068323 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] ***
2025-07-06 20:23:59.068334 | orchestrator | Sunday 06 July 2025 20:21:29 +0000 (0:00:06.109) 0:00:35.957 ***********
2025-07-06 20:23:59.068351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-06 20:23:59.068376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-07-06 20:23:59.068395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-07-06 20:23:59.068407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-07-06 20:23:59.068418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-07-06 20:23:59.068430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-06 20:23:59.068441 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:23:59.068453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-06 20:23:59.068476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-07-06 20:23:59.068494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-07-06 20:23:59.068505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-07-06 20:23:59.068517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-07-06 20:23:59.068528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-06 20:23:59.068539 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:23:59.068550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-06 20:23:59.068572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-07-06 20:23:59.068592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-07-06 20:23:59.068604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-07-06 20:23:59.068615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-07-06 20:23:59.068626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-06 20:23:59.068637 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:23:59.068648 | orchestrator |
2025-07-06 20:23:59.068660 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] ***
2025-07-06 20:23:59.068671 | orchestrator | Sunday 06 July 2025 20:21:31 +0000 (0:00:02.848) 0:00:38.805 ***********
2025-07-06 20:23:59.068682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-06 20:23:59.068702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-07-06 20:23:59.068721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-07-06 20:23:59.068733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-06 20:23:59.068744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-07-06 20:23:59.068755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-07-06 20:23:59.068795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-07-06 20:23:59.068812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-06 20:23:59.068823 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:23:59.068841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-07-06 20:23:59.068852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-07-06 20:23:59.068864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-07-06 20:23:59.068875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-06 20:23:59.068892 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:23:59.068903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image':
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-06 20:23:59.068920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-06 20:23:59.068932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2025-07-06 20:23:59.068950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-06 20:23:59.068961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-06 20:23:59.068972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:23:59.068990 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:23:59.069001 | 
orchestrator | 2025-07-06 20:23:59.069011 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-07-06 20:23:59.069023 | orchestrator | Sunday 06 July 2025 20:21:34 +0000 (0:00:02.234) 0:00:41.039 *********** 2025-07-06 20:23:59.069034 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-06 20:23:59.069051 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-06 20:23:59.069071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-06 20:23:59.069083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-06 20:23:59.069095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-06 20:23:59.069113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-06 20:23:59.069176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:59.069196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:59.069214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:59.069224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:59.069234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:59.069244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:59.069264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:59.069274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:59.069289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:59.069304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:59.069314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:59.069324 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:59.069341 | orchestrator | 2025-07-06 20:23:59.069350 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-07-06 20:23:59.069360 | orchestrator | Sunday 06 July 2025 20:21:40 +0000 (0:00:06.734) 0:00:47.774 *********** 2025-07-06 20:23:59.069370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-06 20:23:59.069381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-06 20:23:59.069396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-06 20:23:59.069411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-06 20:23:59.069422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-06 20:23:59.069437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-06 20:23:59.069447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:59.069457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:59.069472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:59.069489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:59.069499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:59.069524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:59.069543 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:59.069562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:59.069581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:59.069604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 
2025-07-06 20:23:59.069632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:59.069652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:59.069680 | orchestrator | 2025-07-06 20:23:59.069691 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-07-06 20:23:59.069701 | orchestrator | Sunday 06 July 2025 20:22:02 +0000 (0:00:21.866) 0:01:09.640 *********** 2025-07-06 20:23:59.069711 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-07-06 20:23:59.069722 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-07-06 20:23:59.069731 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-07-06 20:23:59.069741 | orchestrator | 2025-07-06 20:23:59.069750 | 
orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-07-06 20:23:59.069760 | orchestrator | Sunday 06 July 2025 20:22:10 +0000 (0:00:07.409) 0:01:17.050 *********** 2025-07-06 20:23:59.069769 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-07-06 20:23:59.069779 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-07-06 20:23:59.069788 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-07-06 20:23:59.069797 | orchestrator | 2025-07-06 20:23:59.069807 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-07-06 20:23:59.069816 | orchestrator | Sunday 06 July 2025 20:22:13 +0000 (0:00:03.732) 0:01:20.782 *********** 2025-07-06 20:23:59.069826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-06 20:23:59.069843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-06 20:23:59.069871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-06 20:23:59.069899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-06 20:23:59.069917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-06 20:23:59.069928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-06 20:23:59.069938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-06 20:23:59.069953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-06 20:23:59.069964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-06 20:23:59.069985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-06 20:23:59.069996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-06 20:23:59.070006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-06 20:23:59.070072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-06 20:23:59.070086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-06 20:23:59.070101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-06 20:23:59.070112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-06 
20:23:59.070171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:59.070191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:59.070208 | orchestrator | 2025-07-06 20:23:59.070225 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-07-06 20:23:59.070242 | orchestrator | Sunday 06 July 2025 20:22:17 +0000 (0:00:03.518) 0:01:24.301 *********** 2025-07-06 20:23:59.070260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-06 20:23:59.070273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-06 20:23:59.070292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}}) 2025-07-06 20:23:59.070317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-06 20:23:59.070328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-06 20:23:59.070338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-06 20:23:59.070351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 
'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-06 20:23:59.070368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-06 20:23:59.070392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-06 20:23:59.070420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-06 20:23:59.070437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-06 20:23:59.070448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-06 
20:23:59.070458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-06 20:23:59.070468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-06 20:23:59.070478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-06 20:23:59.070493 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-06 20:23:59.070514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:59 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:23:59.070724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:59.070750 | orchestrator | 2025-07-06 20:23:59.070773 | 
orchestrator | TASK [designate : include_tasks] *********************************************** 2025-07-06 20:23:59.070795 | orchestrator | Sunday 06 July 2025 20:22:21 +0000 (0:00:03.642) 0:01:27.943 *********** 2025-07-06 20:23:59.070816 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:23:59.070830 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:23:59.070841 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:23:59.070851 | orchestrator | 2025-07-06 20:23:59.070863 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-07-06 20:23:59.070874 | orchestrator | Sunday 06 July 2025 20:22:21 +0000 (0:00:00.628) 0:01:28.572 *********** 2025-07-06 20:23:59.070886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-06 20:23:59.070899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-06 20:23:59.070948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-06 20:23:59.070962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-06 20:23:59.070991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-06 20:23:59.071004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:23:59.071015 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:23:59.071027 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-06 20:23:59.071038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-06 20:23:59.071056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-06 20:23:59.071073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-06 20:23:59.071092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-06 20:23:59.071104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:23:59.071116 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:23:59.071158 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': 
'9001'}}}})  2025-07-06 20:23:59.071175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-06 20:23:59.071195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-06 20:23:59.071213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-06 20:23:59.071227 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-06 20:23:59.071247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-06 20:23:59.071262 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:23:59.071275 | orchestrator | 2025-07-06 20:23:59.071288 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-07-06 20:23:59.071299 | orchestrator | Sunday 06 July 2025 20:22:22 +0000 (0:00:00.978) 0:01:29.551 *********** 2025-07-06 20:23:59.071310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-06 20:23:59.071323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-06 20:23:59.071346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-06 20:23:59.071358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-06 20:23:59.071376 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-06 20:23:59.071388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-06 20:23:59.071399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:59.071418 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:59.071429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:59.071445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:59.071463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:59.071474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:59.071485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:59.071497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:59.071514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}}) 2025-07-06 20:23:59.071525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:59.071541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:59.071560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-06 20:23:59.071571 | orchestrator | 2025-07-06 20:23:59.071582 | orchestrator | TASK [designate : 
include_tasks] *********************************************** 2025-07-06 20:23:59.071593 | orchestrator | Sunday 06 July 2025 20:22:27 +0000 (0:00:05.126) 0:01:34.677 *********** 2025-07-06 20:23:59.071604 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:23:59.071615 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:23:59.071626 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:23:59.071637 | orchestrator | 2025-07-06 20:23:59.071648 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-07-06 20:23:59.071659 | orchestrator | Sunday 06 July 2025 20:22:28 +0000 (0:00:00.658) 0:01:35.335 *********** 2025-07-06 20:23:59.071670 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-07-06 20:23:59.071681 | orchestrator | 2025-07-06 20:23:59.071692 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-07-06 20:23:59.071703 | orchestrator | Sunday 06 July 2025 20:22:31 +0000 (0:00:02.619) 0:01:37.955 *********** 2025-07-06 20:23:59.071713 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-07-06 20:23:59.071724 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-07-06 20:23:59.071740 | orchestrator | 2025-07-06 20:23:59.071751 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-07-06 20:23:59.071762 | orchestrator | Sunday 06 July 2025 20:22:33 +0000 (0:00:02.285) 0:01:40.241 *********** 2025-07-06 20:23:59.071773 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:23:59.071784 | orchestrator | 2025-07-06 20:23:59.071795 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-07-06 20:23:59.071806 | orchestrator | Sunday 06 July 2025 20:22:47 +0000 (0:00:14.583) 0:01:54.824 *********** 2025-07-06 20:23:59.071816 | orchestrator | 2025-07-06 20:23:59.071827 | orchestrator | TASK [designate : 
Flush handlers] ********************************************** 2025-07-06 20:23:59.071838 | orchestrator | Sunday 06 July 2025 20:22:48 +0000 (0:00:00.199) 0:01:55.031 *********** 2025-07-06 20:23:59.071848 | orchestrator | 2025-07-06 20:23:59.071859 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-07-06 20:23:59.071870 | orchestrator | Sunday 06 July 2025 20:22:48 +0000 (0:00:00.202) 0:01:55.233 *********** 2025-07-06 20:23:59.071881 | orchestrator | 2025-07-06 20:23:59.071892 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-07-06 20:23:59.071902 | orchestrator | Sunday 06 July 2025 20:22:48 +0000 (0:00:00.345) 0:01:55.579 *********** 2025-07-06 20:23:59.071913 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:23:59.071924 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:23:59.071939 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:23:59.071957 | orchestrator | 2025-07-06 20:23:59.071975 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-07-06 20:23:59.071993 | orchestrator | Sunday 06 July 2025 20:23:06 +0000 (0:00:17.854) 0:02:13.433 *********** 2025-07-06 20:23:59.072010 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:23:59.072029 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:23:59.072051 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:23:59.072077 | orchestrator | 2025-07-06 20:23:59.072095 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-07-06 20:23:59.072112 | orchestrator | Sunday 06 July 2025 20:23:18 +0000 (0:00:12.186) 0:02:25.619 *********** 2025-07-06 20:23:59.072156 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:23:59.072174 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:23:59.072191 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:23:59.072207 | orchestrator | 
2025-07-06 20:23:59.072224 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-07-06 20:23:59.072241 | orchestrator | Sunday 06 July 2025 20:23:25 +0000 (0:00:07.188) 0:02:32.808 *********** 2025-07-06 20:23:59.072260 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:23:59.072280 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:23:59.072298 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:23:59.072315 | orchestrator | 2025-07-06 20:23:59.072332 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-07-06 20:23:59.072349 | orchestrator | Sunday 06 July 2025 20:23:33 +0000 (0:00:07.481) 0:02:40.289 *********** 2025-07-06 20:23:59.072365 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:23:59.072382 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:23:59.072400 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:23:59.072416 | orchestrator | 2025-07-06 20:23:59.072432 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-07-06 20:23:59.072448 | orchestrator | Sunday 06 July 2025 20:23:39 +0000 (0:00:06.501) 0:02:46.791 *********** 2025-07-06 20:23:59.072465 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:23:59.072481 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:23:59.072508 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:23:59.072526 | orchestrator | 2025-07-06 20:23:59.072543 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-07-06 20:23:59.072560 | orchestrator | Sunday 06 July 2025 20:23:48 +0000 (0:00:08.839) 0:02:55.631 *********** 2025-07-06 20:23:59.072577 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:23:59.072616 | orchestrator | 2025-07-06 20:23:59.072634 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:23:59.072653 | 
orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-07-06 20:23:59.072672 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-06 20:23:59.072690 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-06 20:23:59.072707 | orchestrator | 2025-07-06 20:23:59.072725 | orchestrator | 2025-07-06 20:23:59.072760 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:23:59.072780 | orchestrator | Sunday 06 July 2025 20:23:56 +0000 (0:00:07.286) 0:03:02.918 *********** 2025-07-06 20:23:59.072798 | orchestrator | =============================================================================== 2025-07-06 20:23:59.072817 | orchestrator | designate : Copying over designate.conf -------------------------------- 21.87s 2025-07-06 20:23:59.072836 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 17.85s 2025-07-06 20:23:59.072854 | orchestrator | designate : Running Designate bootstrap container ---------------------- 14.58s 2025-07-06 20:23:59.072872 | orchestrator | designate : Restart designate-api container ---------------------------- 12.19s 2025-07-06 20:23:59.072890 | orchestrator | designate : Restart designate-worker container -------------------------- 8.84s 2025-07-06 20:23:59.072908 | orchestrator | designate : Restart designate-producer container ------------------------ 7.48s 2025-07-06 20:23:59.072927 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 7.41s 2025-07-06 20:23:59.072946 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.29s 2025-07-06 20:23:59.072964 | orchestrator | designate : Restart designate-central container ------------------------- 7.19s 2025-07-06 20:23:59.072983 | orchestrator | designate : 
Copying over config.json files for services ----------------- 6.73s 2025-07-06 20:23:59.072994 | orchestrator | designate : Restart designate-mdns container ---------------------------- 6.50s 2025-07-06 20:23:59.073005 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.12s 2025-07-06 20:23:59.073016 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.11s 2025-07-06 20:23:59.073027 | orchestrator | designate : Check designate containers ---------------------------------- 5.13s 2025-07-06 20:23:59.073038 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.86s 2025-07-06 20:23:59.073049 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.82s 2025-07-06 20:23:59.073059 | orchestrator | designate : Ensuring config directories exist --------------------------- 3.77s 2025-07-06 20:23:59.073073 | orchestrator | designate : Copying over named.conf ------------------------------------- 3.73s 2025-07-06 20:23:59.073092 | orchestrator | designate : Copying over rndc.key --------------------------------------- 3.64s 2025-07-06 20:23:59.073111 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.60s 2025-07-06 20:24:02.109224 | orchestrator | 2025-07-06 20:24:02 | INFO  | Task d81dc606-9df8-4696-aa10-20f01755f5b3 is in state STARTED 2025-07-06 20:24:02.111492 | orchestrator | 2025-07-06 20:24:02 | INFO  | Task 7ed6bfb2-0ab7-45ef-9c8c-b38b42243d34 is in state STARTED 2025-07-06 20:24:02.112923 | orchestrator | 2025-07-06 20:24:02 | INFO  | Task 6b315b10-b92d-4e09-8107-b73fa5016883 is in state STARTED 2025-07-06 20:24:02.115515 | orchestrator | 2025-07-06 20:24:02 | INFO  | Task 4b37e11b-e68c-4889-9640-2f8424eb8826 is in state STARTED 2025-07-06 20:24:02.115618 | orchestrator | 2025-07-06 20:24:02 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:24:05.166433 | 
orchestrator | 2025-07-06 20:24:05 | INFO  | Task d81dc606-9df8-4696-aa10-20f01755f5b3 is in state STARTED 2025-07-06 20:24:05.167826 | orchestrator | 2025-07-06 20:24:05 | INFO  | Task 7ed6bfb2-0ab7-45ef-9c8c-b38b42243d34 is in state STARTED 2025-07-06 20:24:05.169910 | orchestrator | 2025-07-06 20:24:05 | INFO  | Task 6b315b10-b92d-4e09-8107-b73fa5016883 is in state STARTED 2025-07-06 20:24:05.171697 | orchestrator | 2025-07-06 20:24:05 | INFO  | Task 4b37e11b-e68c-4889-9640-2f8424eb8826 is in state STARTED 2025-07-06 20:24:05.171730 | orchestrator | 2025-07-06 20:24:05 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:24:08.214810 | orchestrator | 2025-07-06 20:24:08 | INFO  | Task d81dc606-9df8-4696-aa10-20f01755f5b3 is in state STARTED 2025-07-06 20:24:08.217333 | orchestrator | 2025-07-06 20:24:08 | INFO  | Task 7ed6bfb2-0ab7-45ef-9c8c-b38b42243d34 is in state STARTED 2025-07-06 20:24:08.219449 | orchestrator | 2025-07-06 20:24:08 | INFO  | Task 6b315b10-b92d-4e09-8107-b73fa5016883 is in state STARTED 2025-07-06 20:24:08.221461 | orchestrator | 2025-07-06 20:24:08 | INFO  | Task 4b37e11b-e68c-4889-9640-2f8424eb8826 is in state STARTED 2025-07-06 20:24:08.221728 | orchestrator | 2025-07-06 20:24:08 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:24:11.276182 | orchestrator | 2025-07-06 20:24:11 | INFO  | Task d81dc606-9df8-4696-aa10-20f01755f5b3 is in state STARTED 2025-07-06 20:24:11.277216 | orchestrator | 2025-07-06 20:24:11 | INFO  | Task 7ed6bfb2-0ab7-45ef-9c8c-b38b42243d34 is in state STARTED 2025-07-06 20:24:11.279054 | orchestrator | 2025-07-06 20:24:11 | INFO  | Task 6b315b10-b92d-4e09-8107-b73fa5016883 is in state STARTED 2025-07-06 20:24:11.283748 | orchestrator | 2025-07-06 20:24:11 | INFO  | Task 4b37e11b-e68c-4889-9640-2f8424eb8826 is in state SUCCESS 2025-07-06 20:24:11.286481 | orchestrator | 2025-07-06 20:24:11.286543 | orchestrator | 2025-07-06 20:24:11.286563 | orchestrator | PLAY [Group hosts based on 
configuration] ************************************** 2025-07-06 20:24:11.286577 | orchestrator | 2025-07-06 20:24:11.286589 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-06 20:24:11.286600 | orchestrator | Sunday 06 July 2025 20:20:47 +0000 (0:00:00.401) 0:00:00.401 *********** 2025-07-06 20:24:11.286612 | orchestrator | ok: [testbed-manager] 2025-07-06 20:24:11.286624 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:24:11.286636 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:24:11.286643 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:24:11.286649 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:24:11.286656 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:24:11.286663 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:24:11.286670 | orchestrator | 2025-07-06 20:24:11.286677 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-06 20:24:11.286684 | orchestrator | Sunday 06 July 2025 20:20:47 +0000 (0:00:00.727) 0:00:01.128 *********** 2025-07-06 20:24:11.286691 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-07-06 20:24:11.286698 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-07-06 20:24:11.286705 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-07-06 20:24:11.286711 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-07-06 20:24:11.286718 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-07-06 20:24:11.286725 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-07-06 20:24:11.286736 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-07-06 20:24:11.286743 | orchestrator | 2025-07-06 20:24:11.286749 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-07-06 20:24:11.286756 | orchestrator | 2025-07-06 
20:24:11.286763 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-07-06 20:24:11.286769 | orchestrator | Sunday 06 July 2025 20:20:48 +0000 (0:00:00.823) 0:00:01.952 *********** 2025-07-06 20:24:11.286800 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:24:11.286809 | orchestrator | 2025-07-06 20:24:11.286816 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-07-06 20:24:11.286822 | orchestrator | Sunday 06 July 2025 20:20:49 +0000 (0:00:01.365) 0:00:03.318 *********** 2025-07-06 20:24:11.286831 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-07-06 20:24:11.286842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-06 20:24:11.286861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-06 20:24:11.286869 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-06 20:24:11.286890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-06 20:24:11.286898 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-06 20:24:11.286906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:24:11.286919 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-06 20:24:11.286933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:24:11.286945 | orchestrator | changed: [testbed-node-5] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-06 20:24:11.286961 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-06 20:24:11.286974 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-06 20:24:11.286986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:24:11.286993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:24:11.287007 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-07-06 20:24:11.287021 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-06 20:24:11.287033 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:24:11.287049 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-06 20:24:11.287062 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-06 20:24:11.287083 | orchestrator | 
changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:24:11.287097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:24:11.287117 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-06 20:24:11.287130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', 
'/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-06 20:24:11.287169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-06 20:24:11.287177 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-06 20:24:11.287189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-06 20:24:11.287198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:24:11.287214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:24:11.287239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:24:11.287250 | orchestrator | 2025-07-06 20:24:11.287263 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-07-06 20:24:11.287290 | orchestrator | Sunday 06 July 2025 20:20:53 +0000 (0:00:03.578) 0:00:06.897 *********** 2025-07-06 20:24:11.287303 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 
20:24:11.287313 | orchestrator | 2025-07-06 20:24:11.287320 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-07-06 20:24:11.287327 | orchestrator | Sunday 06 July 2025 20:20:54 +0000 (0:00:01.389) 0:00:08.286 *********** 2025-07-06 20:24:11.287334 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-07-06 20:24:11.287342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-06 20:24:11.287351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-06 20:24:11.287367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-06 20:24:11.287392 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-06 20:24:11.287408 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-06 20:24:11.287415 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-06 20:24:11.287422 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-06 20:24:11.287429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:24:11.287436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:24:11.287443 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:24:11.287462 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-06 20:24:11.287478 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-06 20:24:11.287517 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-06 20:24:11.287529 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-06 20:24:11.287538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:24:11.287545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:24:11.287552 | orchestrator 
| changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:24:11.287563 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-07-06 20:24:11.287582 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 
'dimensions': {}}}) 2025-07-06 20:24:11.287590 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-06 20:24:11.287597 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-06 20:24:11.287604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-06 20:24:11.287611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-06 20:24:11.287618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-06 20:24:11.287626 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:24:11.287636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 
20:24:11.288297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:24:11.288375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:24:11.288393 | orchestrator | 2025-07-06 20:24:11.288406 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-07-06 20:24:11.288419 | orchestrator | Sunday 06 July 2025 20:21:00 +0000 (0:00:05.235) 0:00:13.522 *********** 2025-07-06 20:24:11.288437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-06 20:24:11.288456 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:24:11.288468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:24:11.288481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-06 20:24:11.288511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:24:11.288545 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:24:11.288572 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-07-06 20:24:11.288585 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-06 20:24:11.288596 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-06 20:24:11.288608 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-07-06 20:24:11.288624 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:24:11.288635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image':
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-06 20:24:11.288660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:24:11.288679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:24:11.288691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-06 20:24:11.288702 | orchestrator | skipping:
[testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:24:11.288714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-06 20:24:11.288725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:24:11.288736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:24:11.288747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-06 20:24:11.288770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:24:11.288782 | orchestrator | skipping: [testbed-manager]
2025-07-06 20:24:11.288793 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:24:11.288804 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:24:11.288831 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-06
20:24:11.288843 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-06 20:24:11.288855 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-06 20:24:11.288866 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:24:11.288878 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-06 20:24:11.288889 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2',
'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-06 20:24:11.288901 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-06 20:24:11.288918 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:24:11.288934 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-06 20:24:11.288946 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro',
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-06 20:24:11.288968 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-06 20:24:11.288979 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:24:11.288990 | orchestrator |
2025-07-06 20:24:11.289002 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] ***
2025-07-06 20:24:11.289014 | orchestrator | Sunday 06 July 2025 20:21:01 +0000 (0:00:01.274) 0:00:14.796 ***********
2025-07-06 20:24:11.289026 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-07-06 20:24:11.289038 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter',
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-06 20:24:11.289049 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-06 20:24:11.289072 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-07-06 20:24:11.289085 | orchestrator |
skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:24:11.289103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-06 20:24:11.289115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:24:11.289126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:24:11.289184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-06 20:24:11.289206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:24:11.289218 | orchestrator | skipping: [testbed-manager]
2025-07-06 20:24:11.289230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-06 20:24:11.289247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name':
'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:24:11.289258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:24:11.289276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-06 20:24:11.289288 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:24:11.289299 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:24:11.289310 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:24:11.289321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-06 20:24:11.289333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:24:11.289350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:24:11.289362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image':
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-06 20:24:11.289378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:24:11.289390 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:24:11.289407 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-06 20:24:11.289419 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/',
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-06 20:24:11.289430 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-06 20:24:11.289441 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:24:11.289461 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-06 20:24:11.289488 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-06 20:24:11.289504 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter',
'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-06 20:24:11.289523 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:24:11.289542 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-06 20:24:11.289567 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-06 20:24:11.289596 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes':
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-06 20:24:11.289614 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:24:11.289631 | orchestrator | 2025-07-06 20:24:11.289649 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-07-06 20:24:11.289668 | orchestrator | Sunday 06 July 2025 20:21:03 +0000 (0:00:01.694) 0:00:16.491 *********** 2025-07-06 20:24:11.289686 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-07-06 20:24:11.289718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-06 20:24:11.289738 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-06 20:24:11.289757 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-06 20:24:11.289778 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-06 20:24:11.289804 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-06 20:24:11.289829 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-06 20:24:11.289842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:24:11.289853 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-06 20:24:11.289876 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-06 20:24:11.289895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:24:11.289911 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-06 20:24:11.289928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:24:11.289953 | orchestrator | changed: [testbed-node-4] 
=> (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-06 20:24:11.289983 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-06 20:24:11.290003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:24:11.290116 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-07-06 20:24:11.290200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:24:11.290216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:24:11.290228 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-06 20:24:11.290246 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-06 20:24:11.290280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-06 20:24:11.290292 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-06 20:24:11.290312 | orchestrator | changed: [testbed-manager] => 
(item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:24:11.290324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-06 20:24:11.290336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-06 20:24:11.290347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:24:11.290359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:24:11.290375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-06 20:24:11.290387 | orchestrator | 2025-07-06 20:24:11.290398 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-07-06 20:24:11.290410 | orchestrator | Sunday 06 July 2025 20:21:09 +0000 (0:00:05.988) 0:00:22.480 *********** 2025-07-06 20:24:11.290421 | orchestrator | ok: [testbed-manager -> localhost] 2025-07-06 20:24:11.290432 | orchestrator | 2025-07-06 20:24:11.290444 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-07-06 20:24:11.290468 | orchestrator | Sunday 06 July 2025 20:21:09 +0000 (0:00:00.796) 0:00:23.276 *********** 2025-07-06 20:24:11.290488 | orchestrator | skipping: 
[testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090569, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7311976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:24:11.290500 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090569, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7311976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:24:11.290511 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090569, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7311976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:24:11.290522 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090569, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7311976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-06 20:24:11.290534 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090569, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7311976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:24:11.290550 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090569, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7311976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:24:11.290570 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1090558, 'dev': 111, 'nlink': 1, 'atime': 
1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7281976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:24:11.290587 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1090558, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7281976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:24:11.290599 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090569, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7311976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:24:11.290611 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1090558, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7281976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:24:11.290622 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1090558, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7281976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:24:11.290633 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1090525, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7211974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:24:11.290651 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1090525, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7211974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:24:11.290668 | 
orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1090558, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7281976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:24:11.290686 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1090558, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7281976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:24:11.290697 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1090536, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7221975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:24:11.290709 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1090536, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7221975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:24:11.290720 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1090525, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7211974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:24:11.290731 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1090525, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7211974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:24:11.290751 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1090558, 'dev': 111, 'nlink': 1, 
'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7281976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-06 20:24:11.290763 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rules)
2025-07-06 20:24:11.290788 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rules)
2025-07-06 20:24:11.290799 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/hardware.rules)
2025-07-06 20:24:11.290811 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/hardware.rules)
2025-07-06 20:24:11.290822 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/cadvisor.rules)
2025-07-06 20:24:11.290833 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/cadvisor.rules)
2025-07-06 20:24:11.290849 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/cadvisor.rules)
2025-07-06 20:24:11.290867 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/cadvisor.rules)
2025-07-06 20:24:11.290884 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/ceph.rules)
2025-07-06 20:24:11.290895 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/ceph.rules)
2025-07-06 20:24:11.290907 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/hardware.rules)
2025-07-06 20:24:11.290918 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/hardware.rules)
2025-07-06 20:24:11.290929 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/hardware.rules)
2025-07-06 20:24:11.290945 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/hardware.rules)
2025-07-06 20:24:11.290963 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/haproxy.rules)
2025-07-06 20:24:11.290980 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/alertmanager.rules)
2025-07-06 20:24:11.290992 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/haproxy.rules)
2025-07-06 20:24:11.291003 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/ceph.rules)
2025-07-06 20:24:11.291015 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/node.rules)
2025-07-06 20:24:11.291026 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/ceph.rules)
2025-07-06 20:24:11.291042 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/node.rules)
2025-07-06 20:24:11.291060 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/haproxy.rules)
2025-07-06 20:24:11.291078 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/ceph.rules)
2025-07-06 20:24:11.291089 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/ceph.rules)
2025-07-06 20:24:11.291100 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/node.rules)
2025-07-06 20:24:11.291111 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus-extra.rules)
2025-07-06 20:24:11.291123 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus-extra.rules)
2025-07-06 20:24:11.291167 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/haproxy.rules)
2025-07-06 20:24:11.291179 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/haproxy.rules)
2025-07-06 20:24:11.291239 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/haproxy.rules)
2025-07-06 20:24:11.291253 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus-extra.rules)
2025-07-06 20:24:11.291264 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/cadvisor.rules)
2025-07-06 20:24:11.291275 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/redfish.rules)
2025-07-06 20:24:11.291286 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/redfish.rules)
2025-07-06 20:24:11.291304 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/redfish.rules)
2025-07-06 20:24:11.291320 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/node.rules)
2025-07-06 20:24:11.291340 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/node.rules)
2025-07-06 20:24:11.291351 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/node.rules)
2025-07-06 20:24:11.291363 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/openstack.rules)
2025-07-06 20:24:11.291374 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus-extra.rules)
2025-07-06 20:24:11.291385 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/openstack.rules)
2025-07-06 20:24:11.291402 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/openstack.rules)
2025-07-06 20:24:11.291419 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus-extra.rules)
2025-07-06 20:24:11.291436 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/redfish.rules)
2025-07-06 20:24:11.291448 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus-extra.rules)
2025-07-06 20:24:11.291459 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/hardware.rules)
2025-07-06 20:24:11.291470 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/ceph.rec.rules)
2025-07-06 20:24:11.291481 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/redfish.rules)
2025-07-06 20:24:11.291499 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/ceph.rec.rules)
2025-07-06 20:24:11.291518 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/ceph.rec.rules)
2025-07-06 20:24:11.291537 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/openstack.rules)
2025-07-06 20:24:11.291549 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/redfish.rules)
2025-07-06 20:24:11.291560 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/fluentd-aggregator.rules)
2025-07-06 20:24:11.291571 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/openstack.rules)
2025-07-06 20:24:11.291589 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/fluentd-aggregator.rules)
2025-07-06 20:24:11.291601 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/fluentd-aggregator.rules)
2025-07-06 20:24:11.291617 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/openstack.rules)
2025-07-06 20:24:11.291634 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/ceph.rec.rules)
2025-07-06 20:24:11.291646 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/ceph.rec.rules)
2025-07-06 20:24:11.291657 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/ceph.rules)
2025-07-06 20:24:11.291668 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-07-06 20:24:11.291693 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-07-06 20:24:11.291704 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/mysql.rules)
2025-07-06 20:24:11.291721 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-07-06 20:24:11.291740 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/mysql.rules)
2025-07-06 20:24:11.291752 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/fluentd-aggregator.rules)
2025-07-06 20:24:11.291763 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/ceph.rec.rules)
2025-07-06 20:24:11.291774 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1090547, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7261975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:24:11.291792 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090523, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7131975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:24:11.291803 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090599, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7361977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:24:11.291819 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1090551, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7271974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-06 20:24:11.291836 | orchestrator | 
skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090556, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7281976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:24:11.291848 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090556, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7281976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:24:11.291859 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090599, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7361977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:24:11.291877 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1090547, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7261975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:24:11.291888 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090523, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7131975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:24:11.291899 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090599, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7361977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:24:11.291915 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1090543, 'dev': 111, 'nlink': 1, 'atime': 
1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7251976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:24:11.291934 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090523, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7131975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:24:11.291945 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090599, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7361977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:24:11.291956 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1090543, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7251976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:24:11.291974 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1090543, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7251976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:24:11.291985 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1090573, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7321975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:24:11.291996 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090556, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7281976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:24:11.292007 | 
orchestrator | skipping: [testbed-node-4] 2025-07-06 20:24:11.292023 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090556, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7281976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:24:11.292040 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1090543, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7251976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:24:11.292053 | orchestrator | 2025-07-06 20:24:11 | INFO  | Task 0caf5505-e854-4ca9-9ef0-f66f3acb5444 is in state STARTED 2025-07-06 20:24:11.292222 | orchestrator | 2025-07-06 20:24:11 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:24:11.292248 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1090573, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7321975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:24:11.292277 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:24:11.292297 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1090559, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7291975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-06 20:24:11.292318 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1090573, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7321975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:24:11.292337 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:24:11.292354 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090599, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7361977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:24:11.292373 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090599, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7361977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:24:11.292385 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1090573, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7321975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:24:11.292396 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:24:11.292416 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1090543, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7251976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False})  2025-07-06 20:24:11.292428 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1090543, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7251976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:24:11.292446 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1090573, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7321975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:24:11.292458 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:24:11.292469 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1090573, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7321975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-06 20:24:11.292480 | 
orchestrator | skipping: [testbed-node-0] 2025-07-06 20:24:11.292492 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1090566, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7301977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-06 20:24:11.292508 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1090601, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7371976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-06 20:24:11.292520 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1090562, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7291975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-06 20:24:11.292538 | orchestrator | changed: [testbed-manager] => (item={'path': 
'/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090538, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7231975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-06 20:24:11.292560 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1090547, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7261975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-06 20:24:11.292571 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1090523, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7131975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-06 20:24:11.292582 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1090556, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7281976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-06 20:24:11.292593 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1090599, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7361977, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-06 20:24:11.292609 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1090543, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.7251976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-06 20:24:11.292621 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1090573, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 
1751830647.7321975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-06 20:24:11.292632 | orchestrator | 2025-07-06 20:24:11.292644 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-07-06 20:24:11.292656 | orchestrator | Sunday 06 July 2025 20:21:34 +0000 (0:00:24.954) 0:00:48.230 *********** 2025-07-06 20:24:11.292679 | orchestrator | ok: [testbed-manager -> localhost] 2025-07-06 20:24:11.292690 | orchestrator | 2025-07-06 20:24:11.292702 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-07-06 20:24:11.292712 | orchestrator | Sunday 06 July 2025 20:21:36 +0000 (0:00:01.325) 0:00:49.556 *********** 2025-07-06 20:24:11.292723 | orchestrator | [WARNING]: Skipped 2025-07-06 20:24:11.292733 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-06 20:24:11.292743 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2025-07-06 20:24:11.292753 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-06 20:24:11.292762 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-07-06 20:24:11.292772 | orchestrator | ok: [testbed-manager -> localhost] 2025-07-06 20:24:11.292781 | orchestrator | [WARNING]: Skipped 2025-07-06 20:24:11.292791 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-06 20:24:11.292801 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-07-06 20:24:11.292810 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-07-06 20:24:11.292820 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-07-06 20:24:11.292829 | orchestrator | [WARNING]: Skipped 
2025-07-06 20:24:11.292839 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' is not a directory
2025-07-06 20:24:11.292877 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' is not a directory
2025-07-06 20:24:11.292925 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' is not a directory
2025-07-06 20:24:11.292973 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' is not a directory
2025-07-06 20:24:11.293021 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' is not a directory
2025-07-06 20:24:11.293069 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-07-06 20:24:11.293079 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-07-06 20:24:11.293089 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-07-06 20:24:11.293098 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-07-06 20:24:11.293108 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-07-06 20:24:11.293117 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-07-06 20:24:11.293156 | orchestrator |
2025-07-06 20:24:11.293167 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2025-07-06 20:24:11.293177 | orchestrator | Sunday 06 July 2025 20:21:38 +0000 (0:00:02.002) 0:00:51.559 ***********
2025-07-06 20:24:11.293186 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-07-06 20:24:11.293197 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-07-06 20:24:11.293212 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:24:11.293222 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:24:11.293232 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-07-06 20:24:11.293241 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:24:11.293251 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-07-06 20:24:11.293261 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:24:11.293270 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-07-06 20:24:11.293280 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:24:11.293289 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-07-06 20:24:11.293299 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:24:11.293309 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-07-06 20:24:11.293318 | orchestrator |
2025-07-06 20:24:11.293328 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2025-07-06 20:24:11.293338 | orchestrator | Sunday 06 July 2025 20:22:03 +0000 (0:00:25.584) 0:01:17.144 ***********
2025-07-06 20:24:11.293353 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-07-06 20:24:11.293363 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-07-06 20:24:11.293373 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:24:11.293383 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:24:11.293392 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-07-06 20:24:11.293401 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:24:11.293412 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-07-06 20:24:11.293421 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:24:11.293431 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-07-06 20:24:11.293440 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:24:11.293450 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-07-06 20:24:11.293459 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:24:11.293469 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-07-06 20:24:11.293478 | orchestrator |
2025-07-06 20:24:11.293487 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2025-07-06 20:24:11.293497 | orchestrator | Sunday 06 July 2025 20:22:09 +0000 (0:00:05.695) 0:01:22.839 ***********
2025-07-06 20:24:11.293507 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-07-06 20:24:11.293517 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:24:11.293527 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-07-06 20:24:11.293537 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:24:11.293546 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-07-06 20:24:11.293562 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:24:11.293573 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-07-06 20:24:11.293582 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-07-06 20:24:11.293592 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:24:11.293602 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-07-06 20:24:11.293611 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:24:11.293621 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-07-06 20:24:11.293630 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:24:11.293640 | orchestrator |
2025-07-06 20:24:11.293650 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2025-07-06 20:24:11.293659 | orchestrator | Sunday 06 July 2025 20:22:11 +0000 (0:00:01.752) 0:01:24.592 ***********
2025-07-06 20:24:11.293669 | orchestrator | ok: [testbed-manager -> localhost]
2025-07-06 20:24:11.293678 | orchestrator |
2025-07-06 20:24:11.293688 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2025-07-06 20:24:11.293698 | orchestrator | Sunday 06 July 2025 20:22:12 +0000 (0:00:01.296) 0:01:25.888 ***********
2025-07-06 20:24:11.293707 | orchestrator | skipping: [testbed-manager]
2025-07-06 20:24:11.293716 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:24:11.293726 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:24:11.293735 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:24:11.293745 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:24:11.293754 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:24:11.293764 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:24:11.293774 | orchestrator |
2025-07-06 20:24:11.293783 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2025-07-06 20:24:11.293793 | orchestrator | Sunday 06 July 2025 20:22:13 +0000 (0:00:00.997) 0:01:26.886 ***********
2025-07-06 20:24:11.293802 | orchestrator | skipping: [testbed-manager]
2025-07-06 20:24:11.293817 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:24:11.293827 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:24:11.293837 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:24:11.293846 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:24:11.293855 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:24:11.293865 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:24:11.293874 | orchestrator |
2025-07-06 20:24:11.293884 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2025-07-06 20:24:11.293893 | orchestrator | Sunday 06 July 2025 20:22:15 +0000 (0:00:02.038) 0:01:28.924 ***********
2025-07-06 20:24:11.293903 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-07-06 20:24:11.293912 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:24:11.293922 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-07-06 20:24:11.293932 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:24:11.293941 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-07-06 20:24:11.293951 | orchestrator | skipping: [testbed-manager]
2025-07-06 20:24:11.293960 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-07-06 20:24:11.293970 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:24:11.293985 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-07-06 20:24:11.293995 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:24:11.294005 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-07-06 20:24:11.294015 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:24:11.294080 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-07-06 20:24:11.294090 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:24:11.294100 | orchestrator |
2025-07-06 20:24:11.294110 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2025-07-06 20:24:11.294120 | orchestrator | Sunday 06 July 2025 20:22:17 +0000 (0:00:02.030) 0:01:30.955 ***********
2025-07-06 20:24:11.294130 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-07-06 20:24:11.294157 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-07-06 20:24:11.294167 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:24:11.294177 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:24:11.294186 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-07-06 20:24:11.294196 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-07-06 20:24:11.294206 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:24:11.294215 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-07-06 20:24:11.294225 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:24:11.294234 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-07-06 20:24:11.294244 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:24:11.294254 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-07-06 20:24:11.294263 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:24:11.294273 | orchestrator |
2025-07-06 20:24:11.294283 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2025-07-06 20:24:11.294293 | orchestrator | Sunday 06 July 2025 20:22:20 +0000 (0:00:02.144) 0:01:33.861 ***********
2025-07-06 20:24:11.294302 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is not a directory
2025-07-06 20:24:11.294350 | orchestrator | ok: [testbed-manager -> localhost]
2025-07-06 20:24:11.294360 | orchestrator |
2025-07-06 20:24:11.294370 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2025-07-06 20:24:11.294379 | orchestrator | Sunday 06 July 2025 20:22:22 +0000 (0:00:02.144) 0:01:36.005 ***********
2025-07-06 20:24:11.294389 | orchestrator | skipping: [testbed-manager]
2025-07-06 20:24:11.294398 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:24:11.294408 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:24:11.294417 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:24:11.294427 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:24:11.294436 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:24:11.294446 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:24:11.294455 | orchestrator |
2025-07-06 20:24:11.294465 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2025-07-06 20:24:11.294475 | orchestrator | Sunday 06 July 2025 20:22:23 +0000 (0:00:00.767) 0:01:36.773 ***********
2025-07-06 20:24:11.294484 | orchestrator | skipping: [testbed-manager]
2025-07-06 20:24:11.294494 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:24:11.294503 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:24:11.294513 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:24:11.294522 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:24:11.294532 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:24:11.294541 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:24:11.294558 | orchestrator |
2025-07-06 20:24:11.294568 | orchestrator | TASK [prometheus : Check prometheus containers] ********************************
2025-07-06 20:24:11.294578 | orchestrator | Sunday 06 July 2025 20:22:24 +0000 (0:00:01.194) 0:01:37.967 ***********
2025-07-06 20:24:11.294593 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-07-06 20:24:11.294612 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-06 20:24:11.294622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-06 20:24:11.294633 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-06 20:24:11.294643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-06 20:24:11.294653 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-06 20:24:11.294663 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-06 20:24:11.294683 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-06 20:24:11.294694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:24:11.294709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:24:11.294720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:24:11.294731 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-06 20:24:11.294741 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-06 20:24:11.294751 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-06 20:24:11.294761 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-06 20:24:11.294782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:24:11.294799 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-07-06 20:24:11.294811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:24:11.294822 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-06 20:24:11.294832 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-06 20:24:11.294842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:24:11.294858 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-06 20:24:11.294876 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:24:11.294886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-06 20:24:11.294902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-06 20:24:11.294912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-06 20:24:11.294923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:24:11.294933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:24:11.294943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-06 20:24:11.294958 | orchestrator |
2025-07-06 20:24:11.294968 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2025-07-06 20:24:11.294978 | orchestrator | Sunday 06 July 2025 20:22:29 +0000 (0:00:04.822) 0:01:42.789 ***********
2025-07-06 20:24:11.294988 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-07-06 20:24:11.295003 | orchestrator | skipping: [testbed-manager]
2025-07-06 20:24:11.295020 | orchestrator |
2025-07-06 20:24:11.295036 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-07-06 20:24:11.295051 | orchestrator | Sunday 06 July 2025 20:22:30 +0000 (0:00:01.522) 0:01:44.312 ***********
2025-07-06 20:24:11.295066 | orchestrator |
2025-07-06 20:24:11.295082 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-07-06 20:24:11.295097 | orchestrator | Sunday 06 July 2025 20:22:31 +0000 (0:00:00.220) 0:01:44.532 ***********
2025-07-06 20:24:11.295111 | orchestrator |
2025-07-06 20:24:11.295127 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-07-06 20:24:11.295170 | orchestrator | Sunday 06 July 2025 20:22:31 +0000 (0:00:00.049) 0:01:44.581 ***********
2025-07-06 20:24:11.295186 | orchestrator |
2025-07-06 20:24:11.295202 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-07-06 20:24:11.295218 | orchestrator | Sunday 06 July 2025 20:22:31 +0000 (0:00:00.047) 0:01:44.629 ***********
2025-07-06 20:24:11.295234 | orchestrator |
2025-07-06 20:24:11.295249 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-07-06 20:24:11.295266 | orchestrator | Sunday 06 July 2025 20:22:31 +0000 (0:00:00.053) 0:01:44.682 ***********
2025-07-06 20:24:11.295283 | orchestrator |
2025-07-06 20:24:11.295298 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-07-06 20:24:11.295315 | orchestrator | Sunday 06 July 2025 20:22:31 +0000 (0:00:00.049) 0:01:44.731 ***********
2025-07-06 20:24:11.295327 | orchestrator |
2025-07-06 20:24:11.295337 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-07-06 20:24:11.295347 | orchestrator | Sunday 06 July 2025 20:22:31 +0000 (0:00:00.050) 0:01:44.781 ***********
2025-07-06 20:24:11.295356 | orchestrator |
2025-07-06 20:24:11.295366 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2025-07-06 20:24:11.295375 | orchestrator | Sunday 06 July 2025 20:22:31 +0000 (0:00:00.071) 0:01:44.853 ***********
2025-07-06 20:24:11.295385 | orchestrator | changed: [testbed-manager]
2025-07-06 20:24:11.295394 | orchestrator |
2025-07-06 20:24:11.295404 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2025-07-06 20:24:11.295422 | orchestrator | Sunday 06 July 2025 20:22:47 +0000 (0:00:16.163) 0:02:01.016 ***********
2025-07-06 20:24:11.295432 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:24:11.295442 | orchestrator | changed: [testbed-manager]
2025-07-06 20:24:11.295452 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:24:11.295461 | orchestrator | changed: [testbed-node-3]
2025-07-06 20:24:11.295470 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:24:11.295480 | orchestrator | changed: [testbed-node-5]
2025-07-06 20:24:11.295489 | orchestrator | changed: [testbed-node-4]
2025-07-06 20:24:11.295499 | orchestrator |
2025-07-06 20:24:11.295508 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2025-07-06 20:24:11.295518 | orchestrator | Sunday 06 July 2025 20:23:05 +0000 (0:00:17.363) 0:02:18.380 ***********
2025-07-06 20:24:11.295527 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:24:11.295537 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:24:11.295546 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:24:11.295555 | orchestrator |
2025-07-06 20:24:11.295565 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2025-07-06 20:24:11.295585 | orchestrator | Sunday 06 July 2025 20:23:12 +0000 (0:00:07.471) 0:02:25.851 ***********
2025-07-06 20:24:11.295594 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:24:11.295604 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:24:11.295613 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:24:11.295623 | orchestrator |
2025-07-06 20:24:11.295632 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2025-07-06 20:24:11.295642 | orchestrator | Sunday 06 July 2025 20:23:25 +0000 (0:00:12.590) 0:02:38.442 ***********
2025-07-06 20:24:11.295652 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:24:11.295661 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:24:11.295670 | orchestrator | changed: [testbed-node-5]
2025-07-06 20:24:11.295680 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:24:11.295689 | orchestrator | changed: [testbed-node-4]
2025-07-06 20:24:11.295699 | orchestrator | changed: [testbed-node-3]
2025-07-06 20:24:11.295708 | orchestrator | changed: [testbed-manager]
2025-07-06 20:24:11.295718 | orchestrator |
2025-07-06 20:24:11.295727 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2025-07-06 20:24:11.295737 | orchestrator | Sunday 06 July 2025 20:23:34 +0000 (0:00:09.631) 0:02:48.073 ***********
2025-07-06 20:24:11.295746 | orchestrator | changed: [testbed-manager]
2025-07-06 20:24:11.295756 | orchestrator |
2025-07-06 20:24:11.295765 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2025-07-06 20:24:11.295775 | orchestrator | Sunday 06 July 2025 20:23:47 +0000 (0:00:12.933) 0:03:01.006 ***********
2025-07-06 20:24:11.295784 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:24:11.295794 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:24:11.295803 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:24:11.295812 | orchestrator |
2025-07-06 20:24:11.295822 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2025-07-06 20:24:11.295832 | orchestrator | Sunday 06 July 2025 20:23:52 +0000 (0:00:05.014) 0:03:06.021 ***********
2025-07-06 20:24:11.295841 | orchestrator | changed: [testbed-manager]
2025-07-06 20:24:11.295850 | orchestrator |
2025-07-06 20:24:11.295860 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2025-07-06 20:24:11.295870 | orchestrator | Sunday 06 July 2025 20:23:58 +0000 (0:00:06.129) 0:03:12.150 ***********
2025-07-06 20:24:11.295879 | orchestrator | changed: [testbed-node-5]
2025-07-06 20:24:11.295889 | orchestrator | changed: [testbed-node-4]
2025-07-06 20:24:11.295899 | orchestrator | changed: [testbed-node-3]
2025-07-06 20:24:11.295909 | orchestrator |
2025-07-06 20:24:11.295918 | orchestrator | PLAY RECAP *********************************************************************
2025-07-06 20:24:11.295928 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-07-06 20:24:11.295939 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-07-06 20:24:11.295949 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-07-06 20:24:11.295959 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-07-06 20:24:11.295974 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-07-06 20:24:11.295984 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-07-06 20:24:11.295993 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-07-06 20:24:11.296010 | orchestrator |
2025-07-06 20:24:11.296020 | orchestrator |
2025-07-06 20:24:11.296029 | orchestrator | TASKS RECAP ********************************************************************
2025-07-06 20:24:11.296039 | orchestrator | Sunday 06 July 2025 20:24:09 +0000 (0:00:10.612) 0:03:22.762 ***********
2025-07-06 20:24:11.296048 | orchestrator | ===============================================================================
2025-07-06 20:24:11.296058 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 25.58s
2025-07-06 20:24:11.296067 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 24.95s
2025-07-06 20:24:11.296077 |
orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 17.36s 2025-07-06 20:24:11.296086 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 16.16s 2025-07-06 20:24:11.296102 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 12.93s 2025-07-06 20:24:11.296111 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 12.59s 2025-07-06 20:24:11.296121 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.61s 2025-07-06 20:24:11.296130 | orchestrator | prometheus : Restart prometheus-cadvisor container ---------------------- 9.63s 2025-07-06 20:24:11.296189 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 7.47s 2025-07-06 20:24:11.296199 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 6.13s 2025-07-06 20:24:11.296209 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.99s 2025-07-06 20:24:11.296219 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 5.70s 2025-07-06 20:24:11.296228 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.24s 2025-07-06 20:24:11.296238 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 5.01s 2025-07-06 20:24:11.296247 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.82s 2025-07-06 20:24:11.296257 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.58s 2025-07-06 20:24:11.296267 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 2.91s 2025-07-06 20:24:11.296277 | orchestrator | prometheus : Find extra prometheus server config files ------------------ 2.14s 2025-07-06 20:24:11.296286 | orchestrator | 
prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.04s 2025-07-06 20:24:11.296296 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 2.03s 2025-07-06 20:24:14.347236 | orchestrator | 2025-07-06 20:24:14 | INFO  | Task d81dc606-9df8-4696-aa10-20f01755f5b3 is in state STARTED 2025-07-06 20:24:14.348508 | orchestrator | 2025-07-06 20:24:14 | INFO  | Task 7ed6bfb2-0ab7-45ef-9c8c-b38b42243d34 is in state STARTED 2025-07-06 20:24:14.349201 | orchestrator | 2025-07-06 20:24:14 | INFO  | Task 6b315b10-b92d-4e09-8107-b73fa5016883 is in state STARTED 2025-07-06 20:24:14.351515 | orchestrator | 2025-07-06 20:24:14 | INFO  | Task 0caf5505-e854-4ca9-9ef0-f66f3acb5444 is in state STARTED 2025-07-06 20:24:14.351551 | orchestrator | 2025-07-06 20:24:14 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:24:17.380436 | orchestrator | 2025-07-06 20:24:17 | INFO  | Task d81dc606-9df8-4696-aa10-20f01755f5b3 is in state STARTED 2025-07-06 20:24:17.384329 | orchestrator | 2025-07-06 20:24:17 | INFO  | Task 7ed6bfb2-0ab7-45ef-9c8c-b38b42243d34 is in state STARTED 2025-07-06 20:24:17.388333 | orchestrator | 2025-07-06 20:24:17 | INFO  | Task 6b315b10-b92d-4e09-8107-b73fa5016883 is in state STARTED 2025-07-06 20:24:17.388393 | orchestrator | 2025-07-06 20:24:17 | INFO  | Task 0caf5505-e854-4ca9-9ef0-f66f3acb5444 is in state STARTED 2025-07-06 20:24:17.388695 | orchestrator | 2025-07-06 20:24:17 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:24:20.427678 | orchestrator | 2025-07-06 20:24:20 | INFO  | Task d81dc606-9df8-4696-aa10-20f01755f5b3 is in state STARTED 2025-07-06 20:24:20.428768 | orchestrator | 2025-07-06 20:24:20 | INFO  | Task 7ed6bfb2-0ab7-45ef-9c8c-b38b42243d34 is in state STARTED 2025-07-06 20:24:20.430298 | orchestrator | 2025-07-06 20:24:20 | INFO  | Task 6b315b10-b92d-4e09-8107-b73fa5016883 is in state STARTED 2025-07-06 20:24:20.431723 | orchestrator | 2025-07-06 20:24:20 | INFO  | 
Task 0caf5505-e854-4ca9-9ef0-f66f3acb5444 is in state STARTED
2025-07-06 20:24:20.431762 | orchestrator | 2025-07-06 20:24:20 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:25:06.186646 | orchestrator | 2025-07-06 20:25:06 | INFO  | Task d81dc606-9df8-4696-aa10-20f01755f5b3 is in state STARTED
2025-07-06 20:25:06.188088 | orchestrator | 2025-07-06 20:25:06 | INFO  | Task 7ed6bfb2-0ab7-45ef-9c8c-b38b42243d34 is in state STARTED
2025-07-06 20:25:06.190388 | orchestrator | 2025-07-06 20:25:06 | INFO  | Task 6b315b10-b92d-4e09-8107-b73fa5016883 is in state STARTED
2025-07-06 20:25:06.191458 | orchestrator | 2025-07-06 20:25:06 | INFO  | Task 
0caf5505-e854-4ca9-9ef0-f66f3acb5444 is in state STARTED
2025-07-06 20:25:06.191715 | orchestrator | 2025-07-06 20:25:06 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:25:09.227353 | orchestrator | 2025-07-06 20:25:09 | INFO  | Task d81dc606-9df8-4696-aa10-20f01755f5b3 is in state STARTED
2025-07-06 20:25:09.229970 | orchestrator | 2025-07-06 20:25:09 | INFO  | Task 95e1cf0e-0fc5-491b-beed-5f39ae981521 is in state STARTED
2025-07-06 20:25:09.230265 | orchestrator | 2025-07-06 20:25:09 | INFO  | Task 7ed6bfb2-0ab7-45ef-9c8c-b38b42243d34 is in state SUCCESS
2025-07-06 20:25:09.231820 | orchestrator |
2025-07-06 20:25:09.231853 | orchestrator |
2025-07-06 20:25:09.231865 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-06 20:25:09.231876 | orchestrator |
2025-07-06 20:25:09.231887 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-06 20:25:09.231898 | orchestrator | Sunday 06 July 2025 20:24:01 +0000 (0:00:00.293) 0:00:00.293 ***********
2025-07-06 20:25:09.231931 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:25:09.231944 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:25:09.231955 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:25:09.231965 | orchestrator |
2025-07-06 20:25:09.231976 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-06 20:25:09.231987 | orchestrator | Sunday 06 July 2025 20:24:01 +0000 (0:00:00.295) 0:00:00.589 ***********
2025-07-06 20:25:09.231998 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2025-07-06 20:25:09.232012 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2025-07-06 20:25:09.232031 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2025-07-06 20:25:09.232049 | orchestrator |
2025-07-06 20:25:09.232067 | orchestrator | PLAY [Apply role placement] ****************************************************
2025-07-06 20:25:09.232086 | orchestrator |
2025-07-06 20:25:09.232105 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-07-06 20:25:09.232125 | orchestrator | Sunday 06 July 2025 20:24:01 +0000 (0:00:00.453) 0:00:01.043 ***********
2025-07-06 20:25:09.232137 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-06 20:25:09.232148 | orchestrator |
2025-07-06 20:25:09.232159 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2025-07-06 20:25:09.232225 | orchestrator | Sunday 06 July 2025 20:24:02 +0000 (0:00:00.608) 0:00:01.651 ***********
2025-07-06 20:25:09.232252 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2025-07-06 20:25:09.232269 | orchestrator |
2025-07-06 20:25:09.232287 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2025-07-06 20:25:09.232306 | orchestrator | Sunday 06 July 2025 20:24:06 +0000 (0:00:03.596) 0:00:05.247 ***********
2025-07-06 20:25:09.232323 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2025-07-06 20:25:09.232340 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2025-07-06 20:25:09.232359 | orchestrator |
2025-07-06 20:25:09.232378 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2025-07-06 20:25:09.232396 | orchestrator | Sunday 06 July 2025 20:24:12 +0000 (0:00:06.157) 0:00:11.405 ***********
2025-07-06 20:25:09.232416 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-07-06 20:25:09.232435 | orchestrator |
2025-07-06 20:25:09.232453 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2025-07-06 20:25:09.232467 | orchestrator | Sunday 06 July 2025 20:24:15 +0000 (0:00:02.766) 0:00:14.171 ***********
2025-07-06 20:25:09.232480 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-07-06 20:25:09.232490 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2025-07-06 20:25:09.232501 | orchestrator |
2025-07-06 20:25:09.232512 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2025-07-06 20:25:09.232523 | orchestrator | Sunday 06 July 2025 20:24:18 +0000 (0:00:03.617) 0:00:17.789 ***********
2025-07-06 20:25:09.232533 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-07-06 20:25:09.232544 | orchestrator |
2025-07-06 20:25:09.232554 | orchestrator | TASK [service-ks-register : placement | Granting user roles] *******************
2025-07-06 20:25:09.232565 | orchestrator | Sunday 06 July 2025 20:24:21 +0000 (0:00:02.965) 0:00:20.755 ***********
2025-07-06 20:25:09.232576 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin)
2025-07-06 20:25:09.232586 | orchestrator |
2025-07-06 20:25:09.232597 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-07-06 20:25:09.232607 | orchestrator | Sunday 06 July 2025 20:24:25 +0000 (0:00:00.325) 0:00:24.824 ***********
2025-07-06 20:25:09.232618 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:25:09.232628 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:25:09.232639 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:25:09.232649 | orchestrator |
2025-07-06 20:25:09.232672 | orchestrator | TASK [placement : Ensuring config directories exist] ***************************
2025-07-06 20:25:09.232683 | orchestrator | Sunday 06 July 2025 20:24:26 +0000 (0:00:00.325) 0:00:25.150 ***********
2025-07-06 20:25:09.232711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-06 20:25:09.232741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-06 20:25:09.232754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-06 20:25:09.232765 | orchestrator | 2025-07-06 20:25:09.232776 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-07-06 20:25:09.232787 | orchestrator | Sunday 06 July 2025 20:24:26 +0000 (0:00:00.882) 0:00:26.032 *********** 2025-07-06 20:25:09.232798 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:09.232809 | orchestrator | 2025-07-06 20:25:09.232819 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-07-06 20:25:09.232830 | orchestrator | Sunday 06 July 2025 20:24:27 +0000 (0:00:00.143) 0:00:26.176 *********** 2025-07-06 20:25:09.232840 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:09.232851 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:09.232862 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:09.232872 | orchestrator | 2025-07-06 20:25:09.232883 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-07-06 20:25:09.232894 | orchestrator | Sunday 06 July 2025 20:24:27 +0000 (0:00:00.450) 0:00:26.626 *********** 2025-07-06 20:25:09.232904 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:25:09.232921 | 
orchestrator | 2025-07-06 20:25:09.232932 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-07-06 20:25:09.232943 | orchestrator | Sunday 06 July 2025 20:24:28 +0000 (0:00:00.499) 0:00:27.126 *********** 2025-07-06 20:25:09.232959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-06 20:25:09.232979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-06 20:25:09.232991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-06 20:25:09.233002 | orchestrator | 2025-07-06 20:25:09.233012 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-07-06 20:25:09.233023 | orchestrator | Sunday 06 July 2025 20:24:29 +0000 (0:00:01.454) 0:00:28.580 *********** 2025-07-06 20:25:09.233034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-06 20:25:09.233052 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:09.233063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-06 20:25:09.233074 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:09.233097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-06 20:25:09.233123 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:09.233150 | orchestrator | 2025-07-06 20:25:09.233167 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-07-06 20:25:09.233212 | orchestrator | Sunday 06 July 2025 20:24:30 +0000 (0:00:00.687) 0:00:29.267 *********** 2025-07-06 20:25:09.233231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-06 20:25:09.233247 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:09.233266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-06 20:25:09.233298 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:09.233316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-06 20:25:09.233336 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:09.233355 | orchestrator | 2025-07-06 20:25:09.233372 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-07-06 20:25:09.233391 | orchestrator | Sunday 06 July 2025 20:24:30 +0000 (0:00:00.672) 0:00:29.939 *********** 2025-07-06 20:25:09.233543 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-06 20:25:09.233562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-06 20:25:09.233574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 
'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-06 20:25:09.233593 | orchestrator | 2025-07-06 20:25:09.233605 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-07-06 20:25:09.233616 | orchestrator | Sunday 06 July 2025 20:24:32 +0000 (0:00:01.270) 0:00:31.210 *********** 2025-07-06 20:25:09.233627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 
2025-07-06 20:25:09.233643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-06 20:25:09.233663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-06 20:25:09.233675 | orchestrator | 2025-07-06 20:25:09.233686 | orchestrator | TASK [placement : Copying over placement-api 
wsgi configuration] *************** 2025-07-06 20:25:09.233697 | orchestrator | Sunday 06 July 2025 20:24:34 +0000 (0:00:02.349) 0:00:33.559 *********** 2025-07-06 20:25:09.233708 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-07-06 20:25:09.233720 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-07-06 20:25:09.233731 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-07-06 20:25:09.233742 | orchestrator | 2025-07-06 20:25:09.233752 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-07-06 20:25:09.233771 | orchestrator | Sunday 06 July 2025 20:24:36 +0000 (0:00:01.543) 0:00:35.103 *********** 2025-07-06 20:25:09.233782 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:25:09.233793 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:25:09.233804 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:25:09.233814 | orchestrator | 2025-07-06 20:25:09.233825 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-07-06 20:25:09.233836 | orchestrator | Sunday 06 July 2025 20:24:37 +0000 (0:00:01.313) 0:00:36.416 *********** 2025-07-06 20:25:09.233848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-06 20:25:09.233859 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:09.233870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-06 20:25:09.233882 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:09.233909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': 
{'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-06 20:25:09.233921 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:09.233932 | orchestrator | 2025-07-06 20:25:09.233943 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-07-06 20:25:09.233954 | orchestrator | Sunday 06 July 2025 20:24:37 +0000 (0:00:00.448) 0:00:36.865 *********** 2025-07-06 20:25:09.233966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-06 20:25:09.233983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-06 20:25:09.233995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-06 20:25:09.234006 | orchestrator | 2025-07-06 20:25:09.234056 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-07-06 20:25:09.234070 | orchestrator | Sunday 06 July 2025 20:24:39 +0000 (0:00:01.510) 0:00:38.375 *********** 2025-07-06 20:25:09.234081 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:25:09.234092 | orchestrator | 2025-07-06 20:25:09.234103 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 
2025-07-06 20:25:09.234114 | orchestrator | Sunday 06 July 2025 20:24:41 +0000 (0:00:02.064) 0:00:40.439 *********** 2025-07-06 20:25:09.234130 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:25:09.234141 | orchestrator | 2025-07-06 20:25:09.234152 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-07-06 20:25:09.234163 | orchestrator | Sunday 06 July 2025 20:24:43 +0000 (0:00:02.143) 0:00:42.583 *********** 2025-07-06 20:25:09.234199 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:25:09.234219 | orchestrator | 2025-07-06 20:25:09.234239 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-07-06 20:25:09.234257 | orchestrator | Sunday 06 July 2025 20:24:56 +0000 (0:00:13.161) 0:00:55.745 *********** 2025-07-06 20:25:09.234273 | orchestrator | 2025-07-06 20:25:09.234284 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-07-06 20:25:09.234295 | orchestrator | Sunday 06 July 2025 20:24:56 +0000 (0:00:00.128) 0:00:55.874 *********** 2025-07-06 20:25:09.234306 | orchestrator | 2025-07-06 20:25:09.234333 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-07-06 20:25:09.234371 | orchestrator | Sunday 06 July 2025 20:24:56 +0000 (0:00:00.122) 0:00:55.996 *********** 2025-07-06 20:25:09.234389 | orchestrator | 2025-07-06 20:25:09.234407 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-07-06 20:25:09.234425 | orchestrator | Sunday 06 July 2025 20:24:57 +0000 (0:00:00.106) 0:00:56.103 *********** 2025-07-06 20:25:09.234442 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:25:09.234459 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:25:09.234476 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:25:09.234496 | orchestrator | 2025-07-06 20:25:09.234516 | orchestrator | PLAY RECAP 
********************************************************************* 2025-07-06 20:25:09.234535 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-06 20:25:09.234554 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-06 20:25:09.234574 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-06 20:25:09.234592 | orchestrator | 2025-07-06 20:25:09.234610 | orchestrator | 2025-07-06 20:25:09.234621 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:25:09.234632 | orchestrator | Sunday 06 July 2025 20:25:07 +0000 (0:00:10.634) 0:01:06.738 *********** 2025-07-06 20:25:09.234643 | orchestrator | =============================================================================== 2025-07-06 20:25:09.234653 | orchestrator | placement : Running placement bootstrap container ---------------------- 13.16s 2025-07-06 20:25:09.234664 | orchestrator | placement : Restart placement-api container ---------------------------- 10.63s 2025-07-06 20:25:09.234675 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.16s 2025-07-06 20:25:09.234685 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.07s 2025-07-06 20:25:09.234696 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.62s 2025-07-06 20:25:09.234707 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.60s 2025-07-06 20:25:09.234717 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 2.97s 2025-07-06 20:25:09.234728 | orchestrator | service-ks-register : placement | Creating projects --------------------- 2.77s 2025-07-06 20:25:09.234738 | orchestrator | placement : Copying over placement.conf 
--------------------------------- 2.35s 2025-07-06 20:25:09.234749 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.14s 2025-07-06 20:25:09.234760 | orchestrator | placement : Creating placement databases -------------------------------- 2.06s 2025-07-06 20:25:09.234770 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.54s 2025-07-06 20:25:09.234781 | orchestrator | placement : Check placement containers ---------------------------------- 1.51s 2025-07-06 20:25:09.234791 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.45s 2025-07-06 20:25:09.234802 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.31s 2025-07-06 20:25:09.234812 | orchestrator | placement : Copying over config.json files for services ----------------- 1.27s 2025-07-06 20:25:09.234823 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.88s 2025-07-06 20:25:09.234833 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.69s 2025-07-06 20:25:09.234844 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.67s 2025-07-06 20:25:09.234855 | orchestrator | placement : include_tasks ----------------------------------------------- 0.61s 2025-07-06 20:25:09.234866 | orchestrator | 2025-07-06 20:25:09 | INFO  | Task 6b315b10-b92d-4e09-8107-b73fa5016883 is in state STARTED 2025-07-06 20:25:09.234877 | orchestrator | 2025-07-06 20:25:09 | INFO  | Task 0caf5505-e854-4ca9-9ef0-f66f3acb5444 is in state STARTED 2025-07-06 20:25:09.234897 | orchestrator | 2025-07-06 20:25:09 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:25:12.276544 | orchestrator | 2025-07-06 20:25:12 | INFO  | Task d81dc606-9df8-4696-aa10-20f01755f5b3 is in state STARTED 2025-07-06 20:25:12.277273 | orchestrator | 2025-07-06 20:25:12 | 
INFO  | Task 95e1cf0e-0fc5-491b-beed-5f39ae981521 is in state STARTED 2025-07-06 20:25:12.278529 | orchestrator | 2025-07-06 20:25:12 | INFO  | Task 6b315b10-b92d-4e09-8107-b73fa5016883 is in state STARTED 2025-07-06 20:25:12.280096 | orchestrator | 2025-07-06 20:25:12 | INFO  | Task 0caf5505-e854-4ca9-9ef0-f66f3acb5444 is in state STARTED 2025-07-06 20:25:12.280118 | orchestrator | 2025-07-06 20:25:12 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:25:15.321015 | orchestrator | 2025-07-06 20:25:15 | INFO  | Task d81dc606-9df8-4696-aa10-20f01755f5b3 is in state STARTED 2025-07-06 20:25:15.322534 | orchestrator | 2025-07-06 20:25:15 | INFO  | Task 95e1cf0e-0fc5-491b-beed-5f39ae981521 is in state STARTED 2025-07-06 20:25:15.325543 | orchestrator | 2025-07-06 20:25:15 | INFO  | Task 6b315b10-b92d-4e09-8107-b73fa5016883 is in state STARTED 2025-07-06 20:25:15.328244 | orchestrator | 2025-07-06 20:25:15 | INFO  | Task 0caf5505-e854-4ca9-9ef0-f66f3acb5444 is in state STARTED 2025-07-06 20:25:15.328283 | orchestrator | 2025-07-06 20:25:15 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:25:18.378977 | orchestrator | 2025-07-06 20:25:18 | INFO  | Task d81dc606-9df8-4696-aa10-20f01755f5b3 is in state STARTED 2025-07-06 20:25:18.380705 | orchestrator | 2025-07-06 20:25:18 | INFO  | Task 95e1cf0e-0fc5-491b-beed-5f39ae981521 is in state STARTED 2025-07-06 20:25:18.383040 | orchestrator | 2025-07-06 20:25:18 | INFO  | Task 6b315b10-b92d-4e09-8107-b73fa5016883 is in state STARTED 2025-07-06 20:25:18.385580 | orchestrator | 2025-07-06 20:25:18 | INFO  | Task 0caf5505-e854-4ca9-9ef0-f66f3acb5444 is in state STARTED 2025-07-06 20:25:18.385820 | orchestrator | 2025-07-06 20:25:18 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:25:21.424870 | orchestrator | 2025-07-06 20:25:21 | INFO  | Task d81dc606-9df8-4696-aa10-20f01755f5b3 is in state STARTED 2025-07-06 20:25:21.427001 | orchestrator | 2025-07-06 20:25:21 | INFO  | Task 
95e1cf0e-0fc5-491b-beed-5f39ae981521 is in state STARTED 2025-07-06 20:25:21.428718 | orchestrator | 2025-07-06 20:25:21 | INFO  | Task 6b315b10-b92d-4e09-8107-b73fa5016883 is in state STARTED 2025-07-06 20:25:21.429705 | orchestrator | 2025-07-06 20:25:21 | INFO  | Task 0caf5505-e854-4ca9-9ef0-f66f3acb5444 is in state STARTED 2025-07-06 20:25:21.430131 | orchestrator | 2025-07-06 20:25:21 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:25:24.474119 | orchestrator | 2025-07-06 20:25:24 | INFO  | Task d81dc606-9df8-4696-aa10-20f01755f5b3 is in state STARTED 2025-07-06 20:25:24.476033 | orchestrator | 2025-07-06 20:25:24 | INFO  | Task 95e1cf0e-0fc5-491b-beed-5f39ae981521 is in state STARTED 2025-07-06 20:25:24.477791 | orchestrator | 2025-07-06 20:25:24 | INFO  | Task 6b315b10-b92d-4e09-8107-b73fa5016883 is in state STARTED 2025-07-06 20:25:24.479387 | orchestrator | 2025-07-06 20:25:24 | INFO  | Task 0caf5505-e854-4ca9-9ef0-f66f3acb5444 is in state STARTED 2025-07-06 20:25:24.479611 | orchestrator | 2025-07-06 20:25:24 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:25:27.523842 | orchestrator | 2025-07-06 20:25:27 | INFO  | Task d81dc606-9df8-4696-aa10-20f01755f5b3 is in state STARTED 2025-07-06 20:25:27.525735 | orchestrator | 2025-07-06 20:25:27 | INFO  | Task 95e1cf0e-0fc5-491b-beed-5f39ae981521 is in state STARTED 2025-07-06 20:25:27.527427 | orchestrator | 2025-07-06 20:25:27 | INFO  | Task 6b315b10-b92d-4e09-8107-b73fa5016883 is in state STARTED 2025-07-06 20:25:27.528716 | orchestrator | 2025-07-06 20:25:27 | INFO  | Task 0caf5505-e854-4ca9-9ef0-f66f3acb5444 is in state STARTED 2025-07-06 20:25:27.528737 | orchestrator | 2025-07-06 20:25:27 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:25:30.576840 | orchestrator | 2025-07-06 20:25:30 | INFO  | Task d81dc606-9df8-4696-aa10-20f01755f5b3 is in state STARTED 2025-07-06 20:25:30.576900 | orchestrator | 2025-07-06 20:25:30 | INFO  | Task 
95e1cf0e-0fc5-491b-beed-5f39ae981521 is in state STARTED 2025-07-06 20:25:30.576913 | orchestrator | 2025-07-06 20:25:30 | INFO  | Task 6b315b10-b92d-4e09-8107-b73fa5016883 is in state STARTED 2025-07-06 20:25:30.578783 | orchestrator | 2025-07-06 20:25:30 | INFO  | Task 0caf5505-e854-4ca9-9ef0-f66f3acb5444 is in state STARTED 2025-07-06 20:25:30.578806 | orchestrator | 2025-07-06 20:25:30 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:25:33.620967 | orchestrator | 2025-07-06 20:25:33 | INFO  | Task d81dc606-9df8-4696-aa10-20f01755f5b3 is in state STARTED 2025-07-06 20:25:33.622481 | orchestrator | 2025-07-06 20:25:33 | INFO  | Task 95e1cf0e-0fc5-491b-beed-5f39ae981521 is in state STARTED 2025-07-06 20:25:33.624441 | orchestrator | 2025-07-06 20:25:33 | INFO  | Task 6b315b10-b92d-4e09-8107-b73fa5016883 is in state STARTED 2025-07-06 20:25:33.626339 | orchestrator | 2025-07-06 20:25:33 | INFO  | Task 0caf5505-e854-4ca9-9ef0-f66f3acb5444 is in state STARTED 2025-07-06 20:25:33.626377 | orchestrator | 2025-07-06 20:25:33 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:25:36.668160 | orchestrator | 2025-07-06 20:25:36 | INFO  | Task d81dc606-9df8-4696-aa10-20f01755f5b3 is in state STARTED 2025-07-06 20:25:36.670227 | orchestrator | 2025-07-06 20:25:36 | INFO  | Task 95e1cf0e-0fc5-491b-beed-5f39ae981521 is in state STARTED 2025-07-06 20:25:36.671879 | orchestrator | 2025-07-06 20:25:36 | INFO  | Task 6b315b10-b92d-4e09-8107-b73fa5016883 is in state STARTED 2025-07-06 20:25:36.673616 | orchestrator | 2025-07-06 20:25:36 | INFO  | Task 0caf5505-e854-4ca9-9ef0-f66f3acb5444 is in state STARTED 2025-07-06 20:25:36.673651 | orchestrator | 2025-07-06 20:25:36 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:25:39.721049 | orchestrator | 2025-07-06 20:25:39 | INFO  | Task d81dc606-9df8-4696-aa10-20f01755f5b3 is in state STARTED 2025-07-06 20:25:39.723756 | orchestrator | 2025-07-06 20:25:39 | INFO  | Task 
95e1cf0e-0fc5-491b-beed-5f39ae981521 is in state STARTED
2025-07-06 20:25:39.725542 | orchestrator | 2025-07-06 20:25:39 | INFO  | Task 6b315b10-b92d-4e09-8107-b73fa5016883 is in state STARTED
2025-07-06 20:25:39.728449 | orchestrator | 2025-07-06 20:25:39 | INFO  | Task 0caf5505-e854-4ca9-9ef0-f66f3acb5444 is in state STARTED
2025-07-06 20:25:39.728805 | orchestrator | 2025-07-06 20:25:39 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:25:42.767027 | orchestrator | 2025-07-06 20:25:42 | INFO  | Task d81dc606-9df8-4696-aa10-20f01755f5b3 is in state SUCCESS
2025-07-06 20:25:42.768088 | orchestrator |
2025-07-06 20:25:42.768164 | orchestrator |
2025-07-06 20:25:42.768179 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-06 20:25:42.768192 | orchestrator |
2025-07-06 20:25:42.768267 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-06 20:25:42.768279 | orchestrator | Sunday 06 July 2025 20:20:48 +0000 (0:00:00.401) 0:00:00.401 ***********
2025-07-06 20:25:42.768316 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:25:42.768330 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:25:42.768341 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:25:42.768377 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:25:42.768388 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:25:42.768431 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:25:42.768457 | orchestrator |
2025-07-06 20:25:42.768470 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-06 20:25:42.768481 | orchestrator | Sunday 06 July 2025 20:20:49 +0000 (0:00:00.747) 0:00:01.149 ***********
2025-07-06 20:25:42.768491 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2025-07-06 20:25:42.768503 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2025-07-06 20:25:42.768513 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2025-07-06 20:25:42.768524 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2025-07-06 20:25:42.768534 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2025-07-06 20:25:42.768545 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2025-07-06 20:25:42.768556 | orchestrator |
2025-07-06 20:25:42.768566 | orchestrator | PLAY [Apply role neutron] ******************************************************
2025-07-06 20:25:42.768577 | orchestrator |
2025-07-06 20:25:42.768588 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-07-06 20:25:42.768599 | orchestrator | Sunday 06 July 2025 20:20:49 +0000 (0:00:00.546) 0:00:01.695 ***********
2025-07-06 20:25:42.768666 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-06 20:25:42.768680 | orchestrator |
2025-07-06 20:25:42.768757 | orchestrator | TASK [neutron : Get container facts] *******************************************
2025-07-06 20:25:42.768769 | orchestrator | Sunday 06 July 2025 20:20:51 +0000 (0:00:02.105) 0:00:03.801 ***********
2025-07-06 20:25:42.768827 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:25:42.768840 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:25:42.768852 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:25:42.768865 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:25:42.768876 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:25:42.768889 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:25:42.768901 | orchestrator |
2025-07-06 20:25:42.768913 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2025-07-06 20:25:42.768926 | orchestrator | Sunday 06 July 2025 20:20:53 +0000 (0:00:01.329) 0:00:05.130 ***********
2025-07-06 20:25:42.768938 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:25:42.768950 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:25:42.768962 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:25:42.768975 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:25:42.768987 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:25:42.768999 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:25:42.769012 | orchestrator |
2025-07-06 20:25:42.769025 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2025-07-06 20:25:42.769037 | orchestrator | Sunday 06 July 2025 20:20:54 +0000 (0:00:01.166) 0:00:06.297 ***********
2025-07-06 20:25:42.769049 | orchestrator | ok: [testbed-node-0] => {
2025-07-06 20:25:42.769060 | orchestrator |  "changed": false,
2025-07-06 20:25:42.769071 | orchestrator |  "msg": "All assertions passed"
2025-07-06 20:25:42.769082 | orchestrator | }
2025-07-06 20:25:42.769094 | orchestrator | ok: [testbed-node-1] => {
2025-07-06 20:25:42.769105 | orchestrator |  "changed": false,
2025-07-06 20:25:42.769115 | orchestrator |  "msg": "All assertions passed"
2025-07-06 20:25:42.769126 | orchestrator | }
2025-07-06 20:25:42.769137 | orchestrator | ok: [testbed-node-2] => {
2025-07-06 20:25:42.769148 | orchestrator |  "changed": false,
2025-07-06 20:25:42.769159 | orchestrator |  "msg": "All assertions passed"
2025-07-06 20:25:42.769170 | orchestrator | }
2025-07-06 20:25:42.769220 | orchestrator | ok: [testbed-node-3] => {
2025-07-06 20:25:42.769234 | orchestrator |  "changed": false,
2025-07-06 20:25:42.769244 | orchestrator |  "msg": "All assertions passed"
2025-07-06 20:25:42.769255 | orchestrator | }
2025-07-06 20:25:42.769265 | orchestrator | ok: [testbed-node-4] => {
2025-07-06 20:25:42.769298 | orchestrator |  "changed": false,
2025-07-06 20:25:42.769309 | orchestrator |  "msg": "All assertions passed"
2025-07-06 20:25:42.769320 | orchestrator | }
2025-07-06 20:25:42.769331 | orchestrator | ok: [testbed-node-5] => {
2025-07-06 20:25:42.769341 | orchestrator |  "changed": false,
2025-07-06 20:25:42.769352 | orchestrator |  "msg": "All assertions passed"
2025-07-06 20:25:42.769362 | orchestrator | }
2025-07-06 20:25:42.769373 | orchestrator |
2025-07-06 20:25:42.769384 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2025-07-06 20:25:42.769394 | orchestrator | Sunday 06 July 2025 20:20:55 +0000 (0:00:00.729) 0:00:07.027 ***********
2025-07-06 20:25:42.769406 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:25:42.769417 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:25:42.769427 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:25:42.769438 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:25:42.769448 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:25:42.769459 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:25:42.769469 | orchestrator |
2025-07-06 20:25:42.769480 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2025-07-06 20:25:42.769490 | orchestrator | Sunday 06 July 2025 20:20:56 +0000 (0:00:01.218) 0:00:08.245 ***********
2025-07-06 20:25:42.769501 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2025-07-06 20:25:42.769511 | orchestrator |
2025-07-06 20:25:42.769522 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2025-07-06 20:25:42.769532 | orchestrator | Sunday 06 July 2025 20:20:59 +0000 (0:00:03.214) 0:00:11.460 ***********
2025-07-06 20:25:42.769543 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2025-07-06 20:25:42.769555 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2025-07-06 20:25:42.769566 | orchestrator |
2025-07-06 20:25:42.769589 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2025-07-06 20:25:42.769600 | orchestrator | Sunday 06 July 2025 20:21:06 +0000 (0:00:07.066) 0:00:18.526 ***********
2025-07-06 20:25:42.769611 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-07-06 20:25:42.769622 | orchestrator |
2025-07-06 20:25:42.769632 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2025-07-06 20:25:42.769643 | orchestrator | Sunday 06 July 2025 20:21:09 +0000 (0:00:03.128) 0:00:21.654 ***********
2025-07-06 20:25:42.769654 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-07-06 20:25:42.769664 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2025-07-06 20:25:42.769675 | orchestrator |
2025-07-06 20:25:42.769685 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2025-07-06 20:25:42.769712 | orchestrator | Sunday 06 July 2025 20:21:13 +0000 (0:00:03.539) 0:00:25.194 ***********
2025-07-06 20:25:42.769723 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-07-06 20:25:42.769745 | orchestrator |
2025-07-06 20:25:42.769756 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2025-07-06 20:25:42.769767 | orchestrator | Sunday 06 July 2025 20:21:16 +0000 (0:00:03.112) 0:00:28.307 ***********
2025-07-06 20:25:42.769777 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2025-07-06 20:25:42.769788 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2025-07-06 20:25:42.769798 | orchestrator |
2025-07-06 20:25:42.769809 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-07-06 20:25:42.769820 | orchestrator | Sunday 06 July 2025 20:21:23 +0000 (0:00:07.369) 0:00:35.676 ***********
2025-07-06 20:25:42.769830 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:25:42.769841 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:25:42.769852 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:25:42.769863 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:25:42.769873 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:25:42.769884 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:25:42.769902 | orchestrator |
2025-07-06 20:25:42.769913 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2025-07-06 20:25:42.769923 | orchestrator | Sunday 06 July 2025 20:21:24 +0000 (0:00:00.604) 0:00:36.281 ***********
2025-07-06 20:25:42.769934 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:25:42.769945 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:25:42.769955 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:25:42.769966 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:25:42.769976 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:25:42.769987 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:25:42.769997 | orchestrator |
2025-07-06 20:25:42.770008 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2025-07-06 20:25:42.770070 | orchestrator | Sunday 06 July 2025 20:21:27 +0000 (0:00:02.641) 0:00:38.923 ***********
2025-07-06 20:25:42.770081 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:25:42.770092 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:25:42.770103 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:25:42.770114 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:25:42.770125 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:25:42.770135 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:25:42.770146 | orchestrator |
2025-07-06 20:25:42.770157 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-07-06 20:25:42.770168 | orchestrator | Sunday 06 July 2025 20:21:29 +0000 (0:00:02.152) 0:00:41.075 ***********
2025-07-06 20:25:42.770178 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:25:42.770189 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:25:42.770225 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:25:42.770237 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:25:42.770247 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:25:42.770258 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:25:42.770268 | orchestrator |
2025-07-06 20:25:42.770279 | orchestrator | TASK [neutron : Ensuring config directories exist] *****************************
2025-07-06 20:25:42.770290 | orchestrator | Sunday 06 July 2025 20:21:32 +0000 (0:00:03.783) 0:00:44.859 ***********
2025-07-06 20:25:42.770311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-06 20:25:42.770338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-06 20:25:42.770351 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-06 20:25:42.770371 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-06 20:25:42.770388 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-06 20:25:42.770400 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-06 20:25:42.770411 | orchestrator |
2025-07-06 20:25:42.770422 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] *****************************
2025-07-06 20:25:42.770433 | orchestrator | Sunday 06 July 2025 20:21:36 +0000 (0:00:03.733) 0:00:48.592 ***********
2025-07-06 20:25:42.770445 | orchestrator | [WARNING]: Skipped
2025-07-06 20:25:42.770477 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path
2025-07-06 20:25:42.770489 | orchestrator | due to this access issue:
2025-07-06 20:25:42.770500 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not
2025-07-06 20:25:42.770511 | orchestrator | a directory
2025-07-06 20:25:42.770522 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-07-06 20:25:42.770532 | orchestrator |
2025-07-06 20:25:42.770544 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-07-06 20:25:42.770561 | orchestrator | Sunday 06 July 2025 20:21:38 +0000 (0:00:01.528) 0:00:50.121 ***********
2025-07-06 20:25:42.770580 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-06 20:25:42.770592 | orchestrator |
2025-07-06 20:25:42.770603 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ********
2025-07-06 20:25:42.770614 | orchestrator | Sunday 06 July 2025 20:21:39 +0000 (0:00:01.664) 0:00:51.786 ***********
2025-07-06 20:25:42.770625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-06 20:25:42.770637 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-06 20:25:42.770653 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-06 20:25:42.770665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-06 20:25:42.770685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-06 20:25:42.770703 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-06 20:25:42.770714 | orchestrator |
2025-07-06 20:25:42.770726 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] ***
2025-07-06 20:25:42.770737 | orchestrator | Sunday 06 July 2025 20:21:43 +0000 (0:00:03.969) 0:00:55.755 ***********
2025-07-06 20:25:42.770748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-06 20:25:42.770759 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:25:42.770775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-06 20:25:42.770787 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:25:42.770799 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-06 20:25:42.770816 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:25:42.770835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-06 20:25:42.770846 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:25:42.770858 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-06 20:25:42.770869 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:25:42.770880 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-06 20:25:42.770891 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:25:42.770902 | orchestrator |
2025-07-06 20:25:42.770913 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] *****
2025-07-06 20:25:42.770924 | orchestrator | Sunday 06 July 2025 20:21:47 +0000 (0:00:03.139) 0:00:58.895 ***********
2025-07-06 20:25:42.770940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-06 20:25:42.770957 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:25:42.770986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-06 20:25:42.770997 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:25:42.771009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-06 20:25:42.771020 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:25:42.771031 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-06 20:25:42.771042 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:25:42.771064 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-06 20:25:42.771075 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:25:42.771086 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-06 20:25:42.771103 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:25:42.771114 | orchestrator |
2025-07-06 20:25:42.771125 | orchestrator | TASK [neutron : Creating TLS backend PEM File] *********************************
2025-07-06 20:25:42.771136 | orchestrator | Sunday 06 July 2025 20:21:51 +0000 (0:00:04.157) 0:01:03.052 ***********
2025-07-06 20:25:42.771147 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:25:42.771157 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:25:42.771168 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:25:42.771179 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:25:42.771189 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:25:42.771305 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:25:42.771317 | orchestrator |
2025-07-06 20:25:42.771328 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************
2025-07-06 20:25:42.771347 | orchestrator | Sunday 06 July 2025 20:21:54 +0000 (0:00:03.219) 0:01:06.272 ***********
2025-07-06 20:25:42.771359 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:25:42.771380 | orchestrator |
2025-07-06 20:25:42.771392 | orchestrator | TASK [neutron : Set neutron policy file] ***************************************
2025-07-06 20:25:42.771403 | orchestrator | Sunday 06 July 2025 20:21:54 +0000 (0:00:00.124) 0:01:06.397 ***********
2025-07-06 20:25:42.771413 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:25:42.771424 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:25:42.771444 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:25:42.771455 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:25:42.771466 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:25:42.771477 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:25:42.771487 | orchestrator |
2025-07-06 20:25:42.771498 | orchestrator | TASK [neutron : Copying over existing policy file] *****************************
2025-07-06 20:25:42.771509 | orchestrator | Sunday 06 July 2025 20:21:55 +0000 (0:00:00.787) 0:01:07.184 ***********
2025-07-06 20:25:42.771520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-06 20:25:42.771530 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:25:42.771539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-06 20:25:42.771557 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:25:42.771571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-06 20:25:42.771582 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:25:42.771597 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {},
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:25:42.771608 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:25:42.771618 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:25:42.771628 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:25:42.771637 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:25:42.771647 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:25:42.771656 | orchestrator | 2025-07-06 
20:25:42.771666 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-07-06 20:25:42.771682 | orchestrator | Sunday 06 July 2025 20:21:58 +0000 (0:00:03.376) 0:01:10.561 *********** 2025-07-06 20:25:42.771696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-06 20:25:42.771707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-06 20:25:42.771725 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-06 20:25:42.771735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-06 20:25:42.771746 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-06 20:25:42.771762 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-06 20:25:42.771772 | orchestrator | 2025-07-06 20:25:42.771786 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-07-06 20:25:42.771796 | orchestrator | Sunday 06 July 2025 20:22:02 +0000 (0:00:04.140) 0:01:14.701 *********** 2025-07-06 20:25:42.771806 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-06 20:25:42.771822 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-06 20:25:42.771832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-06 20:25:42.771842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-06 20:25:42.771861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}}}}) 2025-07-06 20:25:42.771872 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-06 20:25:42.771881 | orchestrator | 2025-07-06 20:25:42.771891 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-07-06 20:25:42.771901 | orchestrator | Sunday 06 July 2025 20:22:10 +0000 (0:00:07.714) 0:01:22.415 *********** 2025-07-06 20:25:42.771918 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:25:42.771929 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:25:42.771939 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-06 20:25:42.771954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-06 20:25:42.771969 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:25:42.771979 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:25:42.771989 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:25:42.771998 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:25:42.772016 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-06 20:25:42.772026 | orchestrator | 2025-07-06 20:25:42.772036 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-07-06 20:25:42.772046 | orchestrator | Sunday 06 July 2025 20:22:14 +0000 (0:00:03.631) 0:01:26.047 *********** 2025-07-06 20:25:42.772055 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:25:42.772070 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:25:42.772080 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:25:42.772089 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:25:42.772098 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:25:42.772108 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:25:42.772117 | orchestrator | 2025-07-06 20:25:42.772127 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-07-06 20:25:42.772136 | orchestrator | Sunday 06 July 2025 20:22:17 +0000 (0:00:03.577) 0:01:29.625 *********** 2025-07-06 20:25:42.772146 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:25:42.772156 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:25:42.772170 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:25:42.772181 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:25:42.772190 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:25:42.772224 | orchestrator | skipping: [testbed-node-5] 
2025-07-06 20:25:42.772242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-06 20:25:42.772253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-06 20:25:42.772269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-06 20:25:42.772279 | orchestrator | 2025-07-06 20:25:42.772288 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-07-06 20:25:42.772298 | orchestrator | Sunday 06 July 2025 20:22:23 +0000 (0:00:05.541) 0:01:35.166 *********** 2025-07-06 20:25:42.772308 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:42.772317 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:42.772327 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:42.772336 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:25:42.772345 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:25:42.772355 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:25:42.772364 | orchestrator | 2025-07-06 20:25:42.772374 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-07-06 20:25:42.772384 | orchestrator | Sunday 06 July 2025 20:22:25 +0000 (0:00:02.679) 0:01:37.846 *********** 2025-07-06 20:25:42.772401 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:42.772411 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:42.772420 | orchestrator | skipping: 
[testbed-node-2] 2025-07-06 20:25:42.772430 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:25:42.772439 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:25:42.772448 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:25:42.772458 | orchestrator | 2025-07-06 20:25:42.772467 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-07-06 20:25:42.772476 | orchestrator | Sunday 06 July 2025 20:22:29 +0000 (0:00:03.175) 0:01:41.022 *********** 2025-07-06 20:25:42.772486 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:42.772495 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:42.772504 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:42.772514 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:25:42.772523 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:25:42.772533 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:25:42.772542 | orchestrator | 2025-07-06 20:25:42.772552 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-07-06 20:25:42.772561 | orchestrator | Sunday 06 July 2025 20:22:31 +0000 (0:00:02.097) 0:01:43.119 *********** 2025-07-06 20:25:42.772571 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:42.772580 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:42.772596 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:42.772605 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:25:42.772615 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:25:42.772624 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:25:42.772634 | orchestrator | 2025-07-06 20:25:42.772643 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-07-06 20:25:42.772653 | orchestrator | Sunday 06 July 2025 20:22:33 +0000 (0:00:02.310) 0:01:45.430 *********** 2025-07-06 20:25:42.772662 | orchestrator | skipping: 
[testbed-node-1] 2025-07-06 20:25:42.772671 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:42.772681 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:25:42.772690 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:42.772700 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:25:42.772709 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:25:42.772718 | orchestrator | 2025-07-06 20:25:42.772733 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-07-06 20:25:42.772743 | orchestrator | Sunday 06 July 2025 20:22:35 +0000 (0:00:02.182) 0:01:47.613 *********** 2025-07-06 20:25:42.772752 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:42.772762 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:42.772771 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:42.772780 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:25:42.772790 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:25:42.772799 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:25:42.772808 | orchestrator | 2025-07-06 20:25:42.772882 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-07-06 20:25:42.772895 | orchestrator | Sunday 06 July 2025 20:22:38 +0000 (0:00:02.624) 0:01:50.237 *********** 2025-07-06 20:25:42.772904 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-07-06 20:25:42.772914 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:42.772924 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-07-06 20:25:42.772933 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:42.772943 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-07-06 20:25:42.772952 | orchestrator | skipping: [testbed-node-4] 2025-07-06 
20:25:42.772962 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-07-06 20:25:42.772972 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:25:42.772981 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-07-06 20:25:42.772991 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:42.773000 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-07-06 20:25:42.773010 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:25:42.773019 | orchestrator | 2025-07-06 20:25:42.773029 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-07-06 20:25:42.773038 | orchestrator | Sunday 06 July 2025 20:22:42 +0000 (0:00:03.836) 0:01:54.073 *********** 2025-07-06 20:25:42.773049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-06 20:25:42.773066 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:42.773081 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 
'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:25:42.773091 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:25:42.773101 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:25:42.773111 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:25:42.773128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-06 20:25:42.773138 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:42.773148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-06 20:25:42.773158 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:42.773168 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:25:42.773183 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:25:42.773218 | orchestrator | 2025-07-06 20:25:42.773231 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-07-06 20:25:42.773245 | orchestrator | Sunday 06 July 2025 20:22:44 +0000 (0:00:02.365) 0:01:56.439 *********** 2025-07-06 20:25:42.773255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-06 20:25:42.773265 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:42.773282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-06 20:25:42.773292 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:42.773302 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:25:42.773312 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:25:42.773322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-06 20:25:42.773339 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:42.773352 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:25:42.773363 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:25:42.773373 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:25:42.773383 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:25:42.773392 | orchestrator | 2025-07-06 20:25:42.773402 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-07-06 20:25:42.773412 | orchestrator | Sunday 06 July 2025 20:22:47 +0000 (0:00:02.464) 0:01:58.904 *********** 2025-07-06 20:25:42.773421 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:42.773431 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:42.773441 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:25:42.773450 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:42.773460 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:25:42.773475 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:25:42.773485 | orchestrator | 2025-07-06 20:25:42.773495 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-07-06 20:25:42.773504 | orchestrator | Sunday 06 July 2025 20:22:52 +0000 (0:00:05.813) 0:02:04.717 *********** 2025-07-06 20:25:42.773514 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:42.773523 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:42.773533 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:42.773542 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:25:42.773552 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:25:42.773562 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:25:42.773571 | orchestrator | 2025-07-06 20:25:42.773581 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************ 2025-07-06 20:25:42.773591 | orchestrator | Sunday 06 July 2025 20:22:58 +0000 
(0:00:05.667) 0:02:10.385 *********** 2025-07-06 20:25:42.773601 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:42.773610 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:42.773620 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:25:42.773629 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:42.773639 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:25:42.773661 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:25:42.773671 | orchestrator | 2025-07-06 20:25:42.773680 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-07-06 20:25:42.773690 | orchestrator | Sunday 06 July 2025 20:23:00 +0000 (0:00:02.443) 0:02:12.829 *********** 2025-07-06 20:25:42.773700 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:42.773709 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:42.773719 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:25:42.773728 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:42.773738 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:25:42.773747 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:25:42.773757 | orchestrator | 2025-07-06 20:25:42.773767 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-07-06 20:25:42.773776 | orchestrator | Sunday 06 July 2025 20:23:02 +0000 (0:00:01.821) 0:02:14.651 *********** 2025-07-06 20:25:42.773786 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:42.773796 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:42.773805 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:42.773815 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:25:42.773824 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:25:42.773834 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:25:42.773843 | orchestrator | 2025-07-06 20:25:42.773853 | orchestrator | TASK [neutron : Copying 
over bgp_dragent.ini] ********************************** 2025-07-06 20:25:42.773862 | orchestrator | Sunday 06 July 2025 20:23:04 +0000 (0:00:01.945) 0:02:16.597 *********** 2025-07-06 20:25:42.773872 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:25:42.773882 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:42.773891 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:42.773901 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:42.773910 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:25:42.773920 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:25:42.773929 | orchestrator | 2025-07-06 20:25:42.773939 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-07-06 20:25:42.773949 | orchestrator | Sunday 06 July 2025 20:23:07 +0000 (0:00:02.973) 0:02:19.570 *********** 2025-07-06 20:25:42.773958 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:42.773968 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:42.773977 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:42.773987 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:25:42.773996 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:25:42.774006 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:25:42.774053 | orchestrator | 2025-07-06 20:25:42.774065 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-07-06 20:25:42.774075 | orchestrator | Sunday 06 July 2025 20:23:10 +0000 (0:00:03.102) 0:02:22.672 *********** 2025-07-06 20:25:42.774084 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:42.774094 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:42.774108 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:42.774117 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:25:42.774127 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:25:42.774136 | orchestrator | 
skipping: [testbed-node-5] 2025-07-06 20:25:42.774146 | orchestrator | 2025-07-06 20:25:42.774156 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-07-06 20:25:42.774166 | orchestrator | Sunday 06 July 2025 20:23:12 +0000 (0:00:01.810) 0:02:24.483 *********** 2025-07-06 20:25:42.774175 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:42.774185 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:42.774216 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:42.774227 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:25:42.774237 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:25:42.774246 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:25:42.774256 | orchestrator | 2025-07-06 20:25:42.774266 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-07-06 20:25:42.774282 | orchestrator | Sunday 06 July 2025 20:23:15 +0000 (0:00:02.920) 0:02:27.404 *********** 2025-07-06 20:25:42.774292 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:42.774301 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:42.774311 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:25:42.774320 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:25:42.774330 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:42.774339 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:25:42.774348 | orchestrator | 2025-07-06 20:25:42.774362 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-07-06 20:25:42.774372 | orchestrator | Sunday 06 July 2025 20:23:17 +0000 (0:00:02.253) 0:02:29.657 *********** 2025-07-06 20:25:42.774382 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-07-06 20:25:42.774392 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:42.774402 | orchestrator | 
skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-07-06 20:25:42.774411 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:42.774427 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-07-06 20:25:42.774437 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:42.774446 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-07-06 20:25:42.774456 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:25:42.774466 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-07-06 20:25:42.774475 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:25:42.774485 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-07-06 20:25:42.774495 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:25:42.774504 | orchestrator | 2025-07-06 20:25:42.774514 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-07-06 20:25:42.774524 | orchestrator | Sunday 06 July 2025 20:23:21 +0000 (0:00:03.541) 0:02:33.198 *********** 2025-07-06 20:25:42.774534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-06 20:25:42.774544 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:42.774555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-06 20:25:42.774570 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:42.774584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-06 20:25:42.774594 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:42.774610 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:25:42.774620 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:25:42.774630 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:25:42.774640 | orchestrator | skipping: [testbed-node-4] 
2025-07-06 20:25:42.774650 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-06 20:25:42.774660 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:25:42.774670 | orchestrator | 2025-07-06 20:25:42.774679 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-07-06 20:25:42.774689 | orchestrator | Sunday 06 July 2025 20:23:23 +0000 (0:00:02.135) 0:02:35.334 *********** 2025-07-06 20:25:42.774703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-06 20:25:42.774720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-06 20:25:42.774737 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-06 20:25:42.774748 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-06 20:25:42.774758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-06 20:25:42.774782 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-06 20:25:42.774792 | orchestrator |
2025-07-06 20:25:42.774802 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-07-06 20:25:42.774812 | orchestrator | Sunday 06 July 2025 20:23:26 +0000 (0:00:02.560) 0:02:37.895 ***********
2025-07-06 20:25:42.774822 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:25:42.774831 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:25:42.774841 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:25:42.774850 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:25:42.774860 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:25:42.774869 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:25:42.774879 | orchestrator |
2025-07-06 20:25:42.774888 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2025-07-06 20:25:42.774898 | orchestrator | Sunday 06 July 2025 20:23:27 +0000 (0:00:01.439) 0:02:39.335 ***********
2025-07-06 20:25:42.774908 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:25:42.774917 | orchestrator |
2025-07-06 20:25:42.774927 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2025-07-06 20:25:42.774936 | orchestrator | Sunday 06 July 2025 20:23:29 +0000 (0:00:02.413) 0:02:41.748 ***********
2025-07-06 20:25:42.774946 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:25:42.774955 | orchestrator |
2025-07-06 20:25:42.774965 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2025-07-06 20:25:42.774974 | orchestrator | Sunday 06 July 2025 20:23:32 +0000 (0:00:02.168) 0:02:43.916 ***********
2025-07-06 20:25:42.774984 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:25:42.774993 | orchestrator |
2025-07-06 20:25:42.775003 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-07-06 20:25:42.775012 | orchestrator | Sunday 06 July 2025 20:24:12 +0000 (0:00:40.233) 0:03:24.149 ***********
2025-07-06 20:25:42.775022 | orchestrator |
2025-07-06 20:25:42.775031 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-07-06 20:25:42.775041 | orchestrator | Sunday 06 July 2025 20:24:12 +0000 (0:00:00.068) 0:03:24.218 ***********
2025-07-06 20:25:42.775051 | orchestrator |
2025-07-06 20:25:42.775060 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-07-06 20:25:42.775075 | orchestrator | Sunday 06 July 2025 20:24:12 +0000 (0:00:00.257) 0:03:24.475 ***********
2025-07-06 20:25:42.775085 | orchestrator |
2025-07-06 20:25:42.775095 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-07-06 20:25:42.775105 | orchestrator | Sunday 06 July 2025 20:24:12 +0000 (0:00:00.064) 0:03:24.540 ***********
2025-07-06 20:25:42.775114 | orchestrator |
2025-07-06 20:25:42.775124 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-07-06 20:25:42.775133 | orchestrator | Sunday 06 July 2025 20:24:12 +0000 (0:00:00.063) 0:03:24.603 ***********
2025-07-06 20:25:42.775143 | orchestrator |
2025-07-06 20:25:42.775152 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-07-06 20:25:42.775162 | orchestrator | Sunday 06 July 2025 20:24:12 +0000 (0:00:00.064) 0:03:24.667 ***********
2025-07-06 20:25:42.775172 | orchestrator |
2025-07-06 20:25:42.775181 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2025-07-06 20:25:42.775191 | orchestrator | Sunday 06 July 2025 20:24:12 +0000 (0:00:00.065) 0:03:24.732 ***********
2025-07-06 20:25:42.775248 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:25:42.775265 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:25:42.775281 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:25:42.775291 | orchestrator |
2025-07-06 20:25:42.775301 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2025-07-06 20:25:42.775311 | orchestrator | Sunday 06 July 2025 20:24:41 +0000 (0:00:28.521) 0:03:53.254 ***********
2025-07-06 20:25:42.775320 | orchestrator | changed: [testbed-node-3]
2025-07-06 20:25:42.775330 | orchestrator | changed: [testbed-node-5]
2025-07-06 20:25:42.775339 | orchestrator | changed: [testbed-node-4]
2025-07-06 20:25:42.775349 | orchestrator |
2025-07-06 20:25:42.775358 | orchestrator | PLAY RECAP *********************************************************************
2025-07-06 20:25:42.775369 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-07-06 20:25:42.775379 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-07-06 20:25:42.775389 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-07-06 20:25:42.775398 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-07-06 20:25:42.775408 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-07-06 20:25:42.775418 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-07-06 20:25:42.775428 | orchestrator |
2025-07-06 20:25:42.775437 | orchestrator |
2025-07-06 20:25:42.775447 | orchestrator | TASKS RECAP ********************************************************************
2025-07-06 20:25:42.775456 | orchestrator | Sunday 06 July 2025 20:25:39 +0000 (0:00:57.984) 0:04:51.238 ***********
2025-07-06 20:25:42.775466 | orchestrator | ===============================================================================
2025-07-06 20:25:42.775475 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 57.98s
2025-07-06 20:25:42.775485 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 40.23s
2025-07-06 20:25:42.775494 | orchestrator | neutron : Restart neutron-server container ----------------------------- 28.52s
2025-07-06 20:25:42.775509 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 7.71s
2025-07-06 20:25:42.775518 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.37s
2025-07-06 20:25:42.775528 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 7.07s
2025-07-06 20:25:42.775537 | orchestrator | neutron : Copying over metadata_agent.ini ------------------------------- 5.81s
2025-07-06 20:25:42.775547 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 5.67s
2025-07-06 20:25:42.775556 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 5.54s
2025-07-06 20:25:42.775566 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 4.16s
2025-07-06 20:25:42.775575 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.14s
2025-07-06 20:25:42.775584 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.97s
2025-07-06 20:25:42.775594 | orchestrator | neutron : Copying over dnsmasq.conf ------------------------------------- 3.84s
2025-07-06 20:25:42.775603 | orchestrator | Setting sysctl values --------------------------------------------------- 3.78s
2025-07-06 20:25:42.775613 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 3.73s
2025-07-06 20:25:42.775622 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 3.63s
2025-07-06 20:25:42.775637 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.58s
2025-07-06 20:25:42.775647 | orchestrator | neutron : Copying over neutron-tls-proxy.cfg ---------------------------- 3.54s
2025-07-06 20:25:42.775656 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.54s
2025-07-06 20:25:42.775666 | orchestrator | neutron : Copying over existing policy file ----------------------------- 3.38s
2025-07-06 20:25:42.775681 | orchestrator | 2025-07-06 20:25:42 | INFO  | Task 95e1cf0e-0fc5-491b-beed-5f39ae981521 is in state STARTED
2025-07-06 20:25:42.775691 | orchestrator | 2025-07-06 20:25:42 | INFO  | Task 792cf3a1-6f83-4370-b18e-6dfa87240fb3 is in state STARTED
2025-07-06 20:25:42.775701 | orchestrator | 2025-07-06 20:25:42 | INFO  | Task 6b315b10-b92d-4e09-8107-b73fa5016883 is in state STARTED
2025-07-06 20:25:42.775711 | orchestrator | 2025-07-06 20:25:42 | INFO  | Task 0caf5505-e854-4ca9-9ef0-f66f3acb5444 is in state STARTED
2025-07-06 20:25:42.775721 | orchestrator | 2025-07-06 20:25:42 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:25:45.805870 | orchestrator | 2025-07-06 20:25:45 | INFO  | Task 95e1cf0e-0fc5-491b-beed-5f39ae981521 is in state SUCCESS
2025-07-06 20:25:45.807439 | orchestrator | 2025-07-06 20:25:45 | INFO  | Task 792cf3a1-6f83-4370-b18e-6dfa87240fb3 is in state STARTED
2025-07-06 20:25:45.809134 | orchestrator | 2025-07-06 20:25:45 | INFO  | Task 6b315b10-b92d-4e09-8107-b73fa5016883 is in state STARTED
2025-07-06 20:25:45.810855 | orchestrator | 2025-07-06 20:25:45 | INFO  | Task 0caf5505-e854-4ca9-9ef0-f66f3acb5444 is in state STARTED
2025-07-06 20:25:45.810890 | orchestrator | 2025-07-06 20:25:45 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:25:48.864433 | orchestrator | 2025-07-06 20:25:48 | INFO  | Task 792cf3a1-6f83-4370-b18e-6dfa87240fb3 is in state STARTED
2025-07-06 20:25:48.865806 | orchestrator | 2025-07-06 20:25:48 | INFO  | Task 6b315b10-b92d-4e09-8107-b73fa5016883 is in state STARTED
2025-07-06 20:25:48.868022 | orchestrator | 2025-07-06 20:25:48 | INFO  | Task 5f58d1dc-9694-446d-9a1a-f894ac7a84c7 is in state STARTED
2025-07-06 20:25:48.869544 | orchestrator | 2025-07-06 20:25:48 | INFO  | Task 0caf5505-e854-4ca9-9ef0-f66f3acb5444 is in state STARTED
2025-07-06 20:25:48.869742 | orchestrator | 2025-07-06 20:25:48 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:25:51.915175 | orchestrator | 2025-07-06 20:25:51 | INFO  | Task 792cf3a1-6f83-4370-b18e-6dfa87240fb3 is in state STARTED
2025-07-06 20:25:51.916879 | orchestrator | 2025-07-06 20:25:51 | INFO  | Task 6b315b10-b92d-4e09-8107-b73fa5016883 is in state STARTED
2025-07-06 20:25:51.919032 | orchestrator | 2025-07-06 20:25:51 | INFO  | Task 5f58d1dc-9694-446d-9a1a-f894ac7a84c7 is in state STARTED
2025-07-06 20:25:51.920456 | orchestrator | 2025-07-06 20:25:51 | INFO  | Task 0caf5505-e854-4ca9-9ef0-f66f3acb5444 is in state STARTED
2025-07-06 20:25:51.920532 | orchestrator | 2025-07-06 20:25:51 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:25:54.962872 | orchestrator | 2025-07-06 20:25:54 | INFO  | Task 792cf3a1-6f83-4370-b18e-6dfa87240fb3 is in state STARTED
2025-07-06 20:25:54.965565 | orchestrator | 2025-07-06 20:25:54 | INFO  | Task 6b315b10-b92d-4e09-8107-b73fa5016883 is in state STARTED
2025-07-06 20:25:54.968503 | orchestrator | 2025-07-06 20:25:54 | INFO  | Task 5f58d1dc-9694-446d-9a1a-f894ac7a84c7 is in state STARTED
2025-07-06 20:25:54.971397 | orchestrator | 2025-07-06 20:25:54 | INFO  | Task 0caf5505-e854-4ca9-9ef0-f66f3acb5444 is in state STARTED
2025-07-06 20:25:54.971483 | orchestrator | 2025-07-06 20:25:54 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:25:58.018483 | orchestrator | 2025-07-06 20:25:58 | INFO  | Task 792cf3a1-6f83-4370-b18e-6dfa87240fb3 is in state STARTED
2025-07-06 20:25:58.019682 | orchestrator | 2025-07-06 20:25:58 | INFO  | Task 6b315b10-b92d-4e09-8107-b73fa5016883 is in state STARTED
2025-07-06 20:25:58.021148 | orchestrator | 2025-07-06 20:25:58 | INFO  | Task 5f58d1dc-9694-446d-9a1a-f894ac7a84c7 is in state STARTED
2025-07-06 20:25:58.024812 | orchestrator |
2025-07-06 20:25:58.024875 | orchestrator |
2025-07-06 20:25:58.024898 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-06 20:25:58.024920 | orchestrator |
2025-07-06 20:25:58.024935 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-06 20:25:58.024947 | orchestrator | Sunday 06 July 2025 20:25:12 +0000 (0:00:00.276) 0:00:00.276 ***********
2025-07-06 20:25:58.024958 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:25:58.024971 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:25:58.024982 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:25:58.024993 | orchestrator | ok: [testbed-manager]
2025-07-06 20:25:58.025003 | orchestrator | ok: [testbed-node-3]
2025-07-06 20:25:58.025014 | orchestrator | ok: [testbed-node-4]
2025-07-06 20:25:58.025024 | orchestrator | ok: [testbed-node-5]
2025-07-06 20:25:58.025035 | orchestrator |
2025-07-06 20:25:58.025046 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-06 20:25:58.025057 | orchestrator | Sunday 06 July 2025 20:25:12 +0000 (0:00:00.782) 0:00:01.059 ***********
2025-07-06 20:25:58.025068 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2025-07-06 20:25:58.025079 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2025-07-06 20:25:58.025089 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2025-07-06 20:25:58.025101 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2025-07-06 20:25:58.025111 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2025-07-06 20:25:58.025122 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2025-07-06 20:25:58.025132 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2025-07-06 20:25:58.025143 | orchestrator |
2025-07-06 20:25:58.025154 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2025-07-06 20:25:58.025164 | orchestrator |
2025-07-06 20:25:58.025175 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2025-07-06 20:25:58.025186 | orchestrator | Sunday 06 July 2025 20:25:13 +0000 (0:00:00.705) 0:00:01.764 ***********
2025-07-06 20:25:58.025198 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-06 20:25:58.025284 | orchestrator |
2025-07-06 20:25:58.025305 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2025-07-06 20:25:58.025325 | orchestrator | Sunday 06 July 2025 20:25:15 +0000 (0:00:01.517) 0:00:03.282 ***********
2025-07-06 20:25:58.025343 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store))
2025-07-06 20:25:58.025362 | orchestrator |
2025-07-06 20:25:58.025383 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2025-07-06 20:25:58.025402 | orchestrator | Sunday 06 July 2025 20:25:18 +0000 (0:00:03.279) 0:00:06.562 ***********
2025-07-06 20:25:58.025423 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2025-07-06 20:25:58.025448 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2025-07-06 20:25:58.025468 | orchestrator |
2025-07-06 20:25:58.025486 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2025-07-06 20:25:58.025499 | orchestrator | Sunday 06 July 2025 20:25:24 +0000 (0:00:06.401) 0:00:12.963 ***********
2025-07-06 20:25:58.025512 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-07-06 20:25:58.025524 | orchestrator |
2025-07-06 20:25:58.025537 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2025-07-06 20:25:58.025571 | orchestrator | Sunday 06 July 2025 20:25:27 +0000 (0:00:03.185) 0:00:16.149 ***********
2025-07-06 20:25:58.025585 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-07-06 20:25:58.025598 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service)
2025-07-06 20:25:58.025611 | orchestrator |
2025-07-06 20:25:58.025622 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2025-07-06 20:25:58.025632 | orchestrator | Sunday 06 July 2025 20:25:31 +0000 (0:00:03.987) 0:00:20.137 ***********
2025-07-06 20:25:58.025643 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-07-06 20:25:58.025654 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin)
2025-07-06 20:25:58.025665 | orchestrator |
2025-07-06 20:25:58.025676 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2025-07-06 20:25:58.025687 | orchestrator | Sunday 06 July 2025 20:25:38 +0000 (0:00:06.647) 0:00:26.784 ***********
2025-07-06 20:25:58.025697 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin)
2025-07-06 20:25:58.025708 | orchestrator |
2025-07-06 20:25:58.025722 | orchestrator | PLAY RECAP *********************************************************************
2025-07-06 20:25:58.025740 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 20:25:58.025759 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 20:25:58.025795 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 20:25:58.025814 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 20:25:58.025831 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 20:25:58.025871 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 20:25:58.025890 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 20:25:58.025906 | orchestrator |
2025-07-06 20:25:58.025921 | orchestrator |
2025-07-06 20:25:58.025939 | orchestrator | TASKS RECAP ********************************************************************
2025-07-06 20:25:58.025957 | orchestrator | Sunday 06 July 2025 20:25:43 +0000 (0:00:05.365) 0:00:32.149 ***********
2025-07-06 20:25:58.025974 | orchestrator | ===============================================================================
2025-07-06 20:25:58.025992 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.65s
2025-07-06 20:25:58.026010 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.40s
2025-07-06 20:25:58.026096 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.37s
2025-07-06 20:25:58.026257 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.99s
2025-07-06 20:25:58.026277 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.28s
2025-07-06 20:25:58.026296 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.19s
2025-07-06 20:25:58.026313 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.52s
2025-07-06 20:25:58.026332 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.78s
2025-07-06 20:25:58.026344 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.71s
2025-07-06 20:25:58.026355 | orchestrator |
2025-07-06 20:25:58.026366 | orchestrator |
2025-07-06 20:25:58.026377 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-06 20:25:58.026388 | orchestrator |
2025-07-06 20:25:58.026399 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-06 20:25:58.026427 | orchestrator | Sunday 06 July 2025 20:24:13 +0000 (0:00:00.353) 0:00:00.353 ***********
2025-07-06 20:25:58.026438 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:25:58.026449 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:25:58.026460 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:25:58.026471 | orchestrator |
2025-07-06 20:25:58.026482 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-06 20:25:58.026494 | orchestrator | Sunday 06 July 2025 20:24:14 +0000 (0:00:00.303) 0:00:00.657 ***********
2025-07-06 20:25:58.026504 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2025-07-06 20:25:58.026516 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2025-07-06 20:25:58.026527 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2025-07-06 20:25:58.026537 | orchestrator |
2025-07-06 20:25:58.026548 | orchestrator | PLAY [Apply role magnum] *******************************************************
2025-07-06 20:25:58.026559 | orchestrator |
2025-07-06 20:25:58.026569 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-07-06 20:25:58.026580 | orchestrator | Sunday 06 July 2025 20:24:14 +0000 (0:00:00.565) 0:00:01.222 ***********
2025-07-06 20:25:58.026591 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-06 20:25:58.026603 | orchestrator |
2025-07-06 20:25:58.026622 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2025-07-06 20:25:58.026640 | orchestrator | Sunday 06 July 2025 20:24:15 +0000 (0:00:00.537) 0:00:01.759 ***********
2025-07-06 20:25:58.026660 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2025-07-06 20:25:58.026677 | orchestrator |
2025-07-06 20:25:58.026697 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2025-07-06 20:25:58.026715 | orchestrator | Sunday 06 July 2025 20:24:18 +0000 (0:00:03.019) 0:00:04.779 ***********
2025-07-06 20:25:58.026735 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2025-07-06 20:25:58.026754 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2025-07-06 20:25:58.026772 | orchestrator |
2025-07-06 20:25:58.026792 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2025-07-06 20:25:58.026811 | orchestrator | Sunday 06 July 2025 20:24:24 +0000 (0:00:05.866) 0:00:10.645 ***********
2025-07-06 20:25:58.026824 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-07-06 20:25:58.026835 | orchestrator |
2025-07-06 20:25:58.026846 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2025-07-06 20:25:58.026856 | orchestrator | Sunday 06 July 2025 20:24:27 +0000 (0:00:03.184) 0:00:13.829 ***********
2025-07-06 20:25:58.026867 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-07-06 20:25:58.026878 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2025-07-06 20:25:58.026889 | orchestrator |
2025-07-06 20:25:58.026899 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2025-07-06 20:25:58.026910 | orchestrator | Sunday 06 July 2025 20:24:31 +0000 (0:00:03.774) 0:00:17.604 ***********
2025-07-06 20:25:58.026921 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-07-06 20:25:58.026931 | orchestrator |
2025-07-06 20:25:58.026942 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2025-07-06 20:25:58.026962 | orchestrator | Sunday 06 July 2025 20:24:34 +0000 (0:00:03.322) 0:00:20.926 ***********
2025-07-06 20:25:58.026973 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2025-07-06 20:25:58.026984 | orchestrator |
2025-07-06 20:25:58.026994 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2025-07-06 20:25:58.027005 | orchestrator | Sunday 06 July 2025 20:24:38 +0000 (0:00:04.126) 0:00:25.052 ***********
2025-07-06 20:25:58.027016 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:25:58.027027 | orchestrator |
2025-07-06 20:25:58.027037 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2025-07-06 20:25:58.027069 | orchestrator | Sunday 06 July 2025 20:24:41 +0000 (0:00:03.147) 0:00:28.200 ***********
2025-07-06 20:25:58.027081 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:25:58.027092 | orchestrator |
2025-07-06 20:25:58.027103 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2025-07-06 20:25:58.027114 | orchestrator | Sunday 06 July 2025 20:24:45 +0000 (0:00:03.929) 0:00:32.129 ***********
2025-07-06 20:25:58.027124 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:25:58.027135 | orchestrator |
2025-07-06 20:25:58.027146 | orchestrator | TASK [magnum : Ensuring
config directories exist] ****************************** 2025-07-06 20:25:58.027157 | orchestrator | Sunday 06 July 2025 20:24:49 +0000 (0:00:03.517) 0:00:35.647 *********** 2025-07-06 20:25:58.027173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-06 20:25:58.027189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9511', 'listen_port': '9511'}}}}) 2025-07-06 20:25:58.027200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-06 20:25:58.027273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:58.027305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:58.027318 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:58.027330 | orchestrator | 2025-07-06 20:25:58.027341 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-07-06 20:25:58.027352 | orchestrator | Sunday 06 July 2025 20:24:50 +0000 (0:00:01.503) 0:00:37.151 *********** 2025-07-06 20:25:58.027363 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:58.027374 | orchestrator | 2025-07-06 20:25:58.027385 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-07-06 20:25:58.027396 | orchestrator | Sunday 06 July 2025 20:24:50 +0000 (0:00:00.152) 0:00:37.303 *********** 2025-07-06 20:25:58.027407 | 
orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:58.027417 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:58.027428 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:58.027439 | orchestrator | 2025-07-06 20:25:58.027450 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-07-06 20:25:58.027461 | orchestrator | Sunday 06 July 2025 20:24:51 +0000 (0:00:00.496) 0:00:37.799 *********** 2025-07-06 20:25:58.027472 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-06 20:25:58.027489 | orchestrator | 2025-07-06 20:25:58.027508 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-07-06 20:25:58.027527 | orchestrator | Sunday 06 July 2025 20:24:52 +0000 (0:00:00.946) 0:00:38.746 *********** 2025-07-06 20:25:58.027545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-06 20:25:58.027571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:25:58.027603 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:58.027637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-06 20:25:58.027660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:25:58.027672 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:58.027684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-06 20:25:58.027695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:25:58.027722 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:58.027733 | orchestrator | 2025-07-06 20:25:58.027744 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-07-06 20:25:58.027755 | orchestrator | Sunday 06 July 2025 20:24:52 +0000 (0:00:00.672) 0:00:39.418 *********** 2025-07-06 20:25:58.027765 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:58.027776 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:58.027787 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:58.027797 | orchestrator | 2025-07-06 20:25:58.027808 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-07-06 20:25:58.027819 | orchestrator | Sunday 06 July 2025 20:24:53 +0000 (0:00:00.323) 0:00:39.742 *********** 2025-07-06 20:25:58.027830 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:25:58.027840 | orchestrator | 2025-07-06 20:25:58.027857 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-07-06 20:25:58.027869 | orchestrator | Sunday 06 July 2025 20:24:53 +0000 (0:00:00.734) 0:00:40.477 *********** 2025-07-06 20:25:58.027888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-06 20:25:58.028104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-06 20:25:58.028118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': 
{'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-06 20:25:58.028140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:58.028158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:58.028181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:58.028193 | orchestrator | 2025-07-06 20:25:58.028325 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-07-06 20:25:58.028342 | orchestrator | Sunday 06 July 2025 20:24:56 +0000 (0:00:02.518) 0:00:42.995 *********** 2025-07-06 20:25:58.028355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-06 20:25:58.028368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:25:58.028408 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:58.028420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-06 20:25:58.028438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-06 20:25:58.028460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:25:58.028472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:25:58.028484 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:58.028495 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:58.028507 | orchestrator | 2025-07-06 20:25:58.028518 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-07-06 20:25:58.028530 | orchestrator | Sunday 06 July 2025 20:24:57 +0000 (0:00:00.636) 0:00:43.632 *********** 2025-07-06 20:25:58.028542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-06 20:25:58.028561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:25:58.028585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-06 20:25:58 | INFO  | Task 0caf5505-e854-4ca9-9ef0-f66f3acb5444 is in state SUCCESS 2025-07-06 20:25:58.028599 | orchestrator | 2025-07-06 20:25:58 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:25:58.028610 | orchestrator | 2025-07-06 20:25:58.028622 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:58.028634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:25:58.028646 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:58.028657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-06 20:25:58.028676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 
'timeout': '30'}}})  2025-07-06 20:25:58.028688 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:58.028699 | orchestrator | 2025-07-06 20:25:58.028711 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-07-06 20:25:58.028722 | orchestrator | Sunday 06 July 2025 20:24:58 +0000 (0:00:01.570) 0:00:45.202 *********** 2025-07-06 20:25:58.028734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-06 20:25:58.028756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-06 20:25:58.028768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-06 20:25:58.028786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:58.028836 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:58.028854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:58.028866 | orchestrator | 2025-07-06 20:25:58.028878 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-07-06 20:25:58.028890 | orchestrator | Sunday 06 July 2025 20:25:01 +0000 (0:00:02.364) 0:00:47.566 *********** 2025-07-06 20:25:58.028909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-06 20:25:58.028921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-06 20:25:58.028941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-06 20:25:58.028955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:58.028973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:58.028996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:58.029010 | orchestrator | 2025-07-06 20:25:58.029024 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-07-06 20:25:58.029037 | orchestrator | Sunday 06 July 2025 20:25:05 +0000 (0:00:04.773) 0:00:52.339 *********** 2025-07-06 20:25:58.029051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 
'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-06 20:25:58.029071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:25:58.029085 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:58.029099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-06 20:25:58.029124 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:25:58.029138 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:58.029152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-06 20:25:58.029171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': 
{'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:25:58.029185 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:58.029198 | orchestrator | 2025-07-06 20:25:58.029235 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-07-06 20:25:58.029249 | orchestrator | Sunday 06 July 2025 20:25:06 +0000 (0:00:00.805) 0:00:53.145 *********** 2025-07-06 20:25:58.029263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-06 20:25:58.029282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-06 20:25:58.029304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-06 20:25:58.029324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 
'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:58.029336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:25:58.029348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 
20:25:58.029360 | orchestrator | 2025-07-06 20:25:58.029372 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-07-06 20:25:58.029383 | orchestrator | Sunday 06 July 2025 20:25:08 +0000 (0:00:02.011) 0:00:55.156 *********** 2025-07-06 20:25:58.029395 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:25:58.029407 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:25:58.029418 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:25:58.029430 | orchestrator | 2025-07-06 20:25:58.029442 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-07-06 20:25:58.029453 | orchestrator | Sunday 06 July 2025 20:25:08 +0000 (0:00:00.291) 0:00:55.448 *********** 2025-07-06 20:25:58.029464 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:25:58.029476 | orchestrator | 2025-07-06 20:25:58.029487 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-07-06 20:25:58.029499 | orchestrator | Sunday 06 July 2025 20:25:11 +0000 (0:00:02.298) 0:00:57.746 *********** 2025-07-06 20:25:58.029510 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:25:58.029522 | orchestrator | 2025-07-06 20:25:58.029533 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-07-06 20:25:58.029545 | orchestrator | Sunday 06 July 2025 20:25:13 +0000 (0:00:02.244) 0:00:59.991 *********** 2025-07-06 20:25:58.029556 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:25:58.029568 | orchestrator | 2025-07-06 20:25:58.029587 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-07-06 20:25:58.029599 | orchestrator | Sunday 06 July 2025 20:25:28 +0000 (0:00:15.251) 0:01:15.242 *********** 2025-07-06 20:25:58.029611 | orchestrator | 2025-07-06 20:25:58.029622 | orchestrator | TASK [magnum : Flush handlers] 
************************************************* 2025-07-06 20:25:58.029640 | orchestrator | Sunday 06 July 2025 20:25:28 +0000 (0:00:00.063) 0:01:15.306 *********** 2025-07-06 20:25:58.029652 | orchestrator | 2025-07-06 20:25:58.029664 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-07-06 20:25:58.029675 | orchestrator | Sunday 06 July 2025 20:25:28 +0000 (0:00:00.062) 0:01:15.368 *********** 2025-07-06 20:25:58.029687 | orchestrator | 2025-07-06 20:25:58.029705 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-07-06 20:25:58.029718 | orchestrator | Sunday 06 July 2025 20:25:28 +0000 (0:00:00.063) 0:01:15.432 *********** 2025-07-06 20:25:58.029729 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:25:58.029741 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:25:58.029752 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:25:58.029764 | orchestrator | 2025-07-06 20:25:58.029775 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-07-06 20:25:58.029787 | orchestrator | Sunday 06 July 2025 20:25:41 +0000 (0:00:12.628) 0:01:28.060 *********** 2025-07-06 20:25:58.029798 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:25:58.029810 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:25:58.029821 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:25:58.029833 | orchestrator | 2025-07-06 20:25:58.029845 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:25:58.029857 | orchestrator | testbed-node-0 : ok=24  changed=17  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-07-06 20:25:58.029869 | orchestrator | testbed-node-1 : ok=11  changed=7  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-07-06 20:25:58.029881 | orchestrator | testbed-node-2 : ok=11  changed=7  unreachable=0 failed=0 skipped=7  
rescued=0 ignored=0 2025-07-06 20:25:58.029893 | orchestrator | 2025-07-06 20:25:58.029904 | orchestrator | 2025-07-06 20:25:58.029915 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:25:58.029927 | orchestrator | Sunday 06 July 2025 20:25:56 +0000 (0:00:15.344) 0:01:43.405 *********** 2025-07-06 20:25:58.029939 | orchestrator | =============================================================================== 2025-07-06 20:25:58.029950 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 15.34s 2025-07-06 20:25:58.029962 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 15.25s 2025-07-06 20:25:58.029973 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 12.63s 2025-07-06 20:25:58.029985 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 5.87s 2025-07-06 20:25:58.029996 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 4.77s 2025-07-06 20:25:58.030008 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.13s 2025-07-06 20:25:58.030077 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.93s 2025-07-06 20:25:58.030090 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.77s 2025-07-06 20:25:58.030101 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.52s 2025-07-06 20:25:58.030113 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.32s 2025-07-06 20:25:58.030124 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.18s 2025-07-06 20:25:58.030136 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.15s 2025-07-06 20:25:58.030147 | 
orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.02s 2025-07-06 20:25:58.030159 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.52s 2025-07-06 20:25:58.030170 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.36s 2025-07-06 20:25:58.030182 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.30s 2025-07-06 20:25:58.030201 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.24s 2025-07-06 20:25:58.030268 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.01s 2025-07-06 20:25:58.030280 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS key ------ 1.57s 2025-07-06 20:25:58.030292 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.50s 2025-07-06 20:26:01.075068 | orchestrator | 2025-07-06 20:26:01 | INFO  | Task cde5bbe8-5215-4bea-839b-11b055a8b83e is in state STARTED 2025-07-06 20:26:01.076068 | orchestrator | 2025-07-06 20:26:01 | INFO  | Task 792cf3a1-6f83-4370-b18e-6dfa87240fb3 is in state STARTED 2025-07-06 20:26:01.077421 | orchestrator | 2025-07-06 20:26:01 | INFO  | Task 6b315b10-b92d-4e09-8107-b73fa5016883 is in state STARTED 2025-07-06 20:26:01.079932 | orchestrator | 2025-07-06 20:26:01 | INFO  | Task 5f58d1dc-9694-446d-9a1a-f894ac7a84c7 is in state STARTED 2025-07-06 20:26:01.080392 | orchestrator | 2025-07-06 20:26:01 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:26:04.124984 | orchestrator | 2025-07-06 20:26:04 | INFO  | Task cde5bbe8-5215-4bea-839b-11b055a8b83e is in state STARTED 2025-07-06 20:26:04.125561 | orchestrator | 2025-07-06 20:26:04 | INFO  | Task 792cf3a1-6f83-4370-b18e-6dfa87240fb3 is in state STARTED 2025-07-06 20:26:04.126957 | orchestrator | 2025-07-06 20:26:04 | INFO  | Task 
6b315b10-b92d-4e09-8107-b73fa5016883 is in state STARTED 2025-07-06 20:26:04.128787 | orchestrator | 2025-07-06 20:26:04 | INFO  | Task 5f58d1dc-9694-446d-9a1a-f894ac7a84c7 is in state STARTED 2025-07-06 20:26:04.129092 | orchestrator | 2025-07-06 20:26:04 | INFO  | Wait 1 second(s) until the next check [identical status checks for the same four tasks every ~3 s from 20:26:07 through 20:27:04 elided] 2025-07-06 20:27:08.024729 | orchestrator | 2025-07-06 20:27:08 | INFO  | Task cde5bbe8-5215-4bea-839b-11b055a8b83e is in state STARTED 2025-07-06 20:27:08.025079 | orchestrator | 2025-07-06 20:27:08 | INFO  | Task 792cf3a1-6f83-4370-b18e-6dfa87240fb3 is in state STARTED 2025-07-06 20:27:08.026067 | orchestrator | 2025-07-06 20:27:08 | INFO  | Task
6b315b10-b92d-4e09-8107-b73fa5016883 is in state STARTED 2025-07-06 20:27:08.026891 | orchestrator | 2025-07-06 20:27:08 | INFO  | Task 5f58d1dc-9694-446d-9a1a-f894ac7a84c7 is in state STARTED 2025-07-06 20:27:08.027045 | orchestrator | 2025-07-06 20:27:08 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:27:11.064993 | orchestrator | 2025-07-06 20:27:11 | INFO  | Task cde5bbe8-5215-4bea-839b-11b055a8b83e is in state STARTED 2025-07-06 20:27:11.065590 | orchestrator | 2025-07-06 20:27:11 | INFO  | Task 792cf3a1-6f83-4370-b18e-6dfa87240fb3 is in state STARTED 2025-07-06 20:27:11.067050 | orchestrator | 2025-07-06 20:27:11 | INFO  | Task 6b315b10-b92d-4e09-8107-b73fa5016883 is in state STARTED 2025-07-06 20:27:11.069469 | orchestrator | 2025-07-06 20:27:11 | INFO  | Task 5f58d1dc-9694-446d-9a1a-f894ac7a84c7 is in state STARTED 2025-07-06 20:27:11.069508 | orchestrator | 2025-07-06 20:27:11 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:27:14.122213 | orchestrator | 2025-07-06 20:27:14 | INFO  | Task ec71205e-4aeb-4c8e-9948-6761051fac47 is in state STARTED 2025-07-06 20:27:14.125775 | orchestrator | 2025-07-06 20:27:14 | INFO  | Task cde5bbe8-5215-4bea-839b-11b055a8b83e is in state STARTED 2025-07-06 20:27:14.126764 | orchestrator | 2025-07-06 20:27:14 | INFO  | Task 792cf3a1-6f83-4370-b18e-6dfa87240fb3 is in state STARTED 2025-07-06 20:27:14.128093 | orchestrator | 2025-07-06 20:27:14 | INFO  | Task 6b315b10-b92d-4e09-8107-b73fa5016883 is in state SUCCESS 2025-07-06 20:27:14.129824 | orchestrator | 2025-07-06 20:27:14 | INFO  | Task 5f58d1dc-9694-446d-9a1a-f894ac7a84c7 is in state STARTED 2025-07-06 20:27:14.129865 | orchestrator | 2025-07-06 20:27:14 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:27:17.172955 | orchestrator | 2025-07-06 20:27:17 | INFO  | Task ec71205e-4aeb-4c8e-9948-6761051fac47 is in state STARTED 2025-07-06 20:27:17.173081 | orchestrator | 2025-07-06 20:27:17 | INFO  | Task 
cde5bbe8-5215-4bea-839b-11b055a8b83e is in state STARTED 2025-07-06 20:27:17.177625 | orchestrator | 2025-07-06 20:27:17 | INFO  | Task 792cf3a1-6f83-4370-b18e-6dfa87240fb3 is in state STARTED 2025-07-06 20:27:17.178853 | orchestrator | 2025-07-06 20:27:17 | INFO  | Task 5f58d1dc-9694-446d-9a1a-f894ac7a84c7 is in state STARTED 2025-07-06 20:27:17.179304 | orchestrator | 2025-07-06 20:27:17 | INFO  | Wait 1 second(s) until the next check [identical status checks for the same four tasks every ~3 s from 20:27:20 through 20:27:41 elided] 2025-07-06 20:27:44.648057 | orchestrator | 2025-07-06 20:27:44 | INFO  | Task ec71205e-4aeb-4c8e-9948-6761051fac47 is in state STARTED 2025-07-06 20:27:44.649916 | orchestrator | 2025-07-06 20:27:44 | INFO  | Task
cde5bbe8-5215-4bea-839b-11b055a8b83e is in state STARTED 2025-07-06 20:27:44.651192 | orchestrator | 2025-07-06 20:27:44 | INFO  | Task 792cf3a1-6f83-4370-b18e-6dfa87240fb3 is in state STARTED 2025-07-06 20:27:44.652537 | orchestrator | 2025-07-06 20:27:44 | INFO  | Task 5f58d1dc-9694-446d-9a1a-f894ac7a84c7 is in state STARTED 2025-07-06 20:27:44.652563 | orchestrator | 2025-07-06 20:27:44 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:27:47.691544 | orchestrator | 2025-07-06 20:27:47 | INFO  | Task ec71205e-4aeb-4c8e-9948-6761051fac47 is in state STARTED 2025-07-06 20:27:47.691592 | orchestrator | 2025-07-06 20:27:47 | INFO  | Task cde5bbe8-5215-4bea-839b-11b055a8b83e is in state STARTED 2025-07-06 20:27:47.693665 | orchestrator | 2025-07-06 20:27:47 | INFO  | Task 792cf3a1-6f83-4370-b18e-6dfa87240fb3 is in state STARTED 2025-07-06 20:27:47.695586 | orchestrator | 2025-07-06 20:27:47 | INFO  | Task 5f58d1dc-9694-446d-9a1a-f894ac7a84c7 is in state STARTED 2025-07-06 20:27:47.695612 | orchestrator | 2025-07-06 20:27:47 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:27:50.735287 | orchestrator | 2025-07-06 20:27:50 | INFO  | Task ec71205e-4aeb-4c8e-9948-6761051fac47 is in state STARTED 2025-07-06 20:27:50.735699 | orchestrator | 2025-07-06 20:27:50 | INFO  | Task cde5bbe8-5215-4bea-839b-11b055a8b83e is in state STARTED 2025-07-06 20:27:50.736595 | orchestrator | 2025-07-06 20:27:50 | INFO  | Task 792cf3a1-6f83-4370-b18e-6dfa87240fb3 is in state STARTED 2025-07-06 20:27:50.737660 | orchestrator | 2025-07-06 20:27:50 | INFO  | Task 5f58d1dc-9694-446d-9a1a-f894ac7a84c7 is in state STARTED 2025-07-06 20:27:50.737746 | orchestrator | 2025-07-06 20:27:50 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:27:53.778874 | orchestrator | 2025-07-06 20:27:53 | INFO  | Task ec71205e-4aeb-4c8e-9948-6761051fac47 is in state STARTED 2025-07-06 20:27:53.779404 | orchestrator | 2025-07-06 20:27:53 | INFO  | Task 
cde5bbe8-5215-4bea-839b-11b055a8b83e is in state STARTED 2025-07-06 20:27:53.783405 | orchestrator | 2025-07-06 20:27:53 | INFO  | Task 792cf3a1-6f83-4370-b18e-6dfa87240fb3 is in state STARTED 2025-07-06 20:27:53.784129 | orchestrator | 2025-07-06 20:27:53 | INFO  | Task 5f58d1dc-9694-446d-9a1a-f894ac7a84c7 is in state STARTED 2025-07-06 20:27:53.784265 | orchestrator | 2025-07-06 20:27:53 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:27:56.820817 | orchestrator | 2025-07-06 20:27:56 | INFO  | Task ec71205e-4aeb-4c8e-9948-6761051fac47 is in state STARTED 2025-07-06 20:27:56.821731 | orchestrator | 2025-07-06 20:27:56 | INFO  | Task cde5bbe8-5215-4bea-839b-11b055a8b83e is in state STARTED 2025-07-06 20:27:56.822568 | orchestrator | 2025-07-06 20:27:56 | INFO  | Task 792cf3a1-6f83-4370-b18e-6dfa87240fb3 is in state STARTED 2025-07-06 20:27:56.823586 | orchestrator | 2025-07-06 20:27:56 | INFO  | Task 5f58d1dc-9694-446d-9a1a-f894ac7a84c7 is in state STARTED 2025-07-06 20:27:56.823645 | orchestrator | 2025-07-06 20:27:56 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:27:59.864756 | orchestrator | 2025-07-06 20:27:59 | INFO  | Task ec71205e-4aeb-4c8e-9948-6761051fac47 is in state STARTED 2025-07-06 20:27:59.868657 | orchestrator | 2025-07-06 20:27:59 | INFO  | Task cde5bbe8-5215-4bea-839b-11b055a8b83e is in state STARTED 2025-07-06 20:27:59.869985 | orchestrator | 2025-07-06 20:27:59 | INFO  | Task 792cf3a1-6f83-4370-b18e-6dfa87240fb3 is in state STARTED 2025-07-06 20:27:59.872883 | orchestrator | 2025-07-06 20:27:59 | INFO  | Task 5f58d1dc-9694-446d-9a1a-f894ac7a84c7 is in state STARTED 2025-07-06 20:27:59.872930 | orchestrator | 2025-07-06 20:27:59 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:28:02.915005 | orchestrator | 2025-07-06 20:28:02 | INFO  | Task ec71205e-4aeb-4c8e-9948-6761051fac47 is in state STARTED 2025-07-06 20:28:02.916318 | orchestrator | 2025-07-06 20:28:02 | INFO  | Task 
cde5bbe8-5215-4bea-839b-11b055a8b83e is in state STARTED 2025-07-06 20:28:02.917394 | orchestrator | 2025-07-06 20:28:02 | INFO  | Task 792cf3a1-6f83-4370-b18e-6dfa87240fb3 is in state STARTED 2025-07-06 20:28:02.918613 | orchestrator | 2025-07-06 20:28:02 | INFO  | Task 5f58d1dc-9694-446d-9a1a-f894ac7a84c7 is in state STARTED 2025-07-06 20:28:02.918639 | orchestrator | 2025-07-06 20:28:02 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:28:05.955195 | orchestrator | 2025-07-06 20:28:05 | INFO  | Task ec71205e-4aeb-4c8e-9948-6761051fac47 is in state STARTED 2025-07-06 20:28:05.955910 | orchestrator | 2025-07-06 20:28:05 | INFO  | Task cde5bbe8-5215-4bea-839b-11b055a8b83e is in state STARTED 2025-07-06 20:28:05.957023 | orchestrator | 2025-07-06 20:28:05 | INFO  | Task 792cf3a1-6f83-4370-b18e-6dfa87240fb3 is in state STARTED 2025-07-06 20:28:05.957674 | orchestrator | 2025-07-06 20:28:05 | INFO  | Task 5f58d1dc-9694-446d-9a1a-f894ac7a84c7 is in state STARTED 2025-07-06 20:28:05.957699 | orchestrator | 2025-07-06 20:28:05 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:28:09.001506 | orchestrator | 2025-07-06 20:28:08 | INFO  | Task ec71205e-4aeb-4c8e-9948-6761051fac47 is in state STARTED 2025-07-06 20:28:09.001933 | orchestrator | 2025-07-06 20:28:08 | INFO  | Task cde5bbe8-5215-4bea-839b-11b055a8b83e is in state STARTED 2025-07-06 20:28:09.002802 | orchestrator | 2025-07-06 20:28:09 | INFO  | Task 792cf3a1-6f83-4370-b18e-6dfa87240fb3 is in state STARTED 2025-07-06 20:28:09.005566 | orchestrator | 2025-07-06 20:28:09 | INFO  | Task 5f58d1dc-9694-446d-9a1a-f894ac7a84c7 is in state STARTED 2025-07-06 20:28:09.005619 | orchestrator | 2025-07-06 20:28:09 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:28:12.062500 | orchestrator | 2025-07-06 20:28:12 | INFO  | Task ec71205e-4aeb-4c8e-9948-6761051fac47 is in state STARTED 2025-07-06 20:28:12.064580 | orchestrator | 2025-07-06 20:28:12 | INFO  | Task 
cde5bbe8-5215-4bea-839b-11b055a8b83e is in state STARTED 2025-07-06 20:28:12.066603 | orchestrator | 2025-07-06 20:28:12 | INFO  | Task 792cf3a1-6f83-4370-b18e-6dfa87240fb3 is in state STARTED 2025-07-06 20:28:12.067975 | orchestrator | 2025-07-06 20:28:12 | INFO  | Task 5f58d1dc-9694-446d-9a1a-f894ac7a84c7 is in state STARTED 2025-07-06 20:28:12.068002 | orchestrator | 2025-07-06 20:28:12 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:28:15.116609 | orchestrator | 2025-07-06 20:28:15 | INFO  | Task ec71205e-4aeb-4c8e-9948-6761051fac47 is in state STARTED 2025-07-06 20:28:15.117990 | orchestrator | 2025-07-06 20:28:15 | INFO  | Task cde5bbe8-5215-4bea-839b-11b055a8b83e is in state STARTED 2025-07-06 20:28:15.120872 | orchestrator | 2025-07-06 20:28:15 | INFO  | Task 792cf3a1-6f83-4370-b18e-6dfa87240fb3 is in state STARTED 2025-07-06 20:28:15.122793 | orchestrator | 2025-07-06 20:28:15 | INFO  | Task 5f58d1dc-9694-446d-9a1a-f894ac7a84c7 is in state STARTED 2025-07-06 20:28:15.122929 | orchestrator | 2025-07-06 20:28:15 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:28:18.173024 | orchestrator | 2025-07-06 20:28:18 | INFO  | Task ec71205e-4aeb-4c8e-9948-6761051fac47 is in state STARTED 2025-07-06 20:28:18.174431 | orchestrator | 2025-07-06 20:28:18 | INFO  | Task cde5bbe8-5215-4bea-839b-11b055a8b83e is in state STARTED 2025-07-06 20:28:18.175689 | orchestrator | 2025-07-06 20:28:18 | INFO  | Task 792cf3a1-6f83-4370-b18e-6dfa87240fb3 is in state STARTED 2025-07-06 20:28:18.177410 | orchestrator | 2025-07-06 20:28:18 | INFO  | Task 5f58d1dc-9694-446d-9a1a-f894ac7a84c7 is in state STARTED 2025-07-06 20:28:18.177435 | orchestrator | 2025-07-06 20:28:18 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:28:21.218367 | orchestrator | 2025-07-06 20:28:21 | INFO  | Task ec71205e-4aeb-4c8e-9948-6761051fac47 is in state STARTED 2025-07-06 20:28:21.221724 | orchestrator | 2025-07-06 20:28:21 | INFO  | Task 
cde5bbe8-5215-4bea-839b-11b055a8b83e is in state STARTED
2025-07-06 20:28:21.225716 | orchestrator | 2025-07-06 20:28:21 | INFO  | Task 792cf3a1-6f83-4370-b18e-6dfa87240fb3 is in state STARTED
2025-07-06 20:28:21.226980 | orchestrator | 2025-07-06 20:28:21 | INFO  | Task 5f58d1dc-9694-446d-9a1a-f894ac7a84c7 is in state STARTED
2025-07-06 20:28:21.227023 | orchestrator | 2025-07-06 20:28:21 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:28:24.263228 | orchestrator | 2025-07-06 20:28:24 | INFO  | Task ec71205e-4aeb-4c8e-9948-6761051fac47 is in state STARTED
2025-07-06 20:28:24.265110 | orchestrator | 2025-07-06 20:28:24 | INFO  | Task cde5bbe8-5215-4bea-839b-11b055a8b83e is in state STARTED
2025-07-06 20:28:24.267524 | orchestrator | 2025-07-06 20:28:24 | INFO  | Task 792cf3a1-6f83-4370-b18e-6dfa87240fb3 is in state STARTED
2025-07-06 20:28:24.269778 | orchestrator | 2025-07-06 20:28:24 | INFO  | Task 5f58d1dc-9694-446d-9a1a-f894ac7a84c7 is in state STARTED
2025-07-06 20:28:24.269820 | orchestrator | 2025-07-06 20:28:24 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:28:27.316862 | orchestrator | 2025-07-06 20:28:27 | INFO  | Task ec71205e-4aeb-4c8e-9948-6761051fac47 is in state STARTED
2025-07-06 20:28:27.318506 | orchestrator | 2025-07-06 20:28:27 | INFO  | Task cde5bbe8-5215-4bea-839b-11b055a8b83e is in state STARTED
2025-07-06 20:28:27.320690 | orchestrator | 2025-07-06 20:28:27 | INFO  | Task 792cf3a1-6f83-4370-b18e-6dfa87240fb3 is in state SUCCESS
2025-07-06 20:28:27.323031 | orchestrator |
2025-07-06 20:28:27.323115 | orchestrator |
2025-07-06 20:28:27.323133 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2025-07-06 20:28:27.323147 | orchestrator |
2025-07-06 20:28:27.323159 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2025-07-06 20:28:27.323171 | orchestrator | Sunday 06 July 2025 20:22:57 +0000 (0:00:00.118) 0:00:00.118 ***********
2025-07-06 20:28:27.323182 | orchestrator | changed: [localhost]
2025-07-06 20:28:27.323195 | orchestrator |
2025-07-06 20:28:27.323206 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2025-07-06 20:28:27.323218 | orchestrator | Sunday 06 July 2025 20:22:57 +0000 (0:00:00.732) 0:00:00.851 ***********
2025-07-06 20:28:27.323228 | orchestrator |
2025-07-06 20:28:27.323265 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-07-06 20:28:27.323276 | orchestrator |
2025-07-06 20:28:27.323287 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-07-06 20:28:27.323298 | orchestrator |
2025-07-06 20:28:27.323309 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-07-06 20:28:27.323345 | orchestrator |
2025-07-06 20:28:27.323357 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2025-07-06 20:28:27.323368 | orchestrator | changed: [localhost]
2025-07-06 20:28:27.323379 | orchestrator |
2025-07-06 20:28:27.323390 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2025-07-06 20:28:27.323402 | orchestrator | Sunday 06 July 2025 20:26:59 +0000 (0:04:02.119) 0:04:02.970 ***********
2025-07-06 20:28:27.323412 | orchestrator | changed: [localhost]
2025-07-06 20:28:27.323424 | orchestrator |
2025-07-06 20:28:27.323434 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-06 20:28:27.323445 | orchestrator |
2025-07-06 20:28:27.323457 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-06 20:28:27.323482 | orchestrator | Sunday 06 July 2025 20:27:10 +0000 (0:00:11.137) 0:04:14.107 ***********
2025-07-06 20:28:27.323494 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:28:27.323505 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:28:27.323515 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:28:27.323526 | orchestrator |
2025-07-06 20:28:27.323537 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-06 20:28:27.323548 | orchestrator | Sunday 06 July 2025 20:27:11 +0000 (0:00:00.315) 0:04:14.423 ***********
2025-07-06 20:28:27.323561 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2025-07-06 20:28:27.323573 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2025-07-06 20:28:27.323586 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2025-07-06 20:28:27.323598 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2025-07-06 20:28:27.323610 | orchestrator |
2025-07-06 20:28:27.323623 | orchestrator | PLAY [Apply role ironic] *******************************************************
2025-07-06 20:28:27.323635 | orchestrator | skipping: no hosts matched
2025-07-06 20:28:27.323648 | orchestrator |
2025-07-06 20:28:27.323660 | orchestrator | PLAY RECAP *********************************************************************
2025-07-06 20:28:27.323674 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 20:28:27.323688 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 20:28:27.323700 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 20:28:27.323711 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 20:28:27.323722 | orchestrator |
2025-07-06 20:28:27.323733 | orchestrator |
2025-07-06 20:28:27.323744 | orchestrator | TASKS RECAP ********************************************************************
2025-07-06 20:28:27.323755 | orchestrator | Sunday 06 July 2025 20:27:11 +0000 (0:00:00.623) 0:04:15.047 ***********
2025-07-06 20:28:27.323766 | orchestrator | ===============================================================================
2025-07-06 20:28:27.323777 | orchestrator | Download ironic-agent initramfs --------------------------------------- 242.12s
2025-07-06 20:28:27.323788 | orchestrator | Download ironic-agent kernel ------------------------------------------- 11.14s
2025-07-06 20:28:27.323798 | orchestrator | Ensure the destination directory exists --------------------------------- 0.73s
2025-07-06 20:28:27.323809 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.62s
2025-07-06 20:28:27.323820 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s
2025-07-06 20:28:27.323830 | orchestrator |
2025-07-06 20:28:27.323841 | orchestrator |
2025-07-06 20:28:27.323852 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-06 20:28:27.323863 | orchestrator |
2025-07-06 20:28:27.323873 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-06 20:28:27.323884 | orchestrator | Sunday 06 July 2025 20:25:43 +0000 (0:00:00.276) 0:00:00.276 ***********
2025-07-06 20:28:27.323902 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:28:27.323913 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:28:27.323924 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:28:27.323935 | orchestrator |
2025-07-06 20:28:27.323946 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-06 20:28:27.323956 | orchestrator | Sunday 06 July 2025 20:25:44 +0000 (0:00:00.443) 0:00:00.719 ***********
2025-07-06 20:28:27.323967 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2025-07-06 20:28:27.323978 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2025-07-06 20:28:27.323989 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2025-07-06 20:28:27.324000 | orchestrator |
2025-07-06 20:28:27.324011 | orchestrator | PLAY [Apply role glance] *******************************************************
2025-07-06 20:28:27.324021 | orchestrator |
2025-07-06 20:28:27.324032 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-07-06 20:28:27.324059 | orchestrator | Sunday 06 July 2025 20:25:45 +0000 (0:00:00.685) 0:00:01.405 ***********
2025-07-06 20:28:27.324071 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-06 20:28:27.324082 | orchestrator |
2025-07-06 20:28:27.324092 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2025-07-06 20:28:27.324103 | orchestrator | Sunday 06 July 2025 20:25:45 +0000 (0:00:00.534) 0:00:01.939 ***********
2025-07-06 20:28:27.324114 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2025-07-06 20:28:27.324125 | orchestrator |
2025-07-06 20:28:27.324135 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2025-07-06 20:28:27.324146 | orchestrator | Sunday 06 July 2025 20:25:49 +0000 (0:00:03.501) 0:00:05.441 ***********
2025-07-06 20:28:27.324157 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2025-07-06 20:28:27.324167 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2025-07-06 20:28:27.324178 | orchestrator |
2025-07-06 20:28:27.324189 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2025-07-06 20:28:27.324199 | orchestrator | Sunday 06 July 2025 20:25:55 +0000 (0:00:06.731) 0:00:12.172 ***********
2025-07-06 20:28:27.324210 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-07-06 20:28:27.324220 | orchestrator | 2025-07-06 20:28:27.324231 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-07-06 20:28:27.324325 | orchestrator | Sunday 06 July 2025 20:25:59 +0000 (0:00:03.185) 0:00:15.358 *********** 2025-07-06 20:28:27.324337 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-07-06 20:28:27.324355 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-07-06 20:28:27.324366 | orchestrator | 2025-07-06 20:28:27.324377 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-07-06 20:28:27.324387 | orchestrator | Sunday 06 July 2025 20:26:02 +0000 (0:00:03.833) 0:00:19.192 *********** 2025-07-06 20:28:27.324398 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-07-06 20:28:27.324409 | orchestrator | 2025-07-06 20:28:27.324419 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-07-06 20:28:27.324430 | orchestrator | Sunday 06 July 2025 20:26:05 +0000 (0:00:03.098) 0:00:22.290 *********** 2025-07-06 20:28:27.324441 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-07-06 20:28:27.324452 | orchestrator | 2025-07-06 20:28:27.324462 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-07-06 20:28:27.324473 | orchestrator | Sunday 06 July 2025 20:26:10 +0000 (0:00:04.079) 0:00:26.370 *********** 2025-07-06 20:28:27.324488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-06 20:28:27.324539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-06 20:28:27.324560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-06 20:28:27.324580 | orchestrator | 2025-07-06 20:28:27.324591 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-07-06 20:28:27.324602 | orchestrator | Sunday 06 July 2025 20:26:13 +0000 (0:00:03.084) 0:00:29.454 *********** 2025-07-06 20:28:27.324613 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:28:27.324624 | orchestrator | 2025-07-06 20:28:27.324635 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-07-06 20:28:27.324646 | orchestrator | Sunday 06 July 2025 20:26:13 +0000 (0:00:00.642) 0:00:30.096 *********** 2025-07-06 20:28:27.324657 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:28:27.324667 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:28:27.324678 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:28:27.324689 | orchestrator | 2025-07-06 20:28:27.324699 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-07-06 20:28:27.324710 | orchestrator | Sunday 06 July 2025 20:26:17 +0000 (0:00:03.633) 0:00:33.730 *********** 2025-07-06 
20:28:27.324721 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-07-06 20:28:27.324732 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-07-06 20:28:27.324743 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-07-06 20:28:27.324753 | orchestrator | 2025-07-06 20:28:27.324764 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-07-06 20:28:27.324781 | orchestrator | Sunday 06 July 2025 20:26:18 +0000 (0:00:01.539) 0:00:35.269 *********** 2025-07-06 20:28:27.324792 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-07-06 20:28:27.324803 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-07-06 20:28:27.324814 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-07-06 20:28:27.324825 | orchestrator | 2025-07-06 20:28:27.324836 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-07-06 20:28:27.324847 | orchestrator | Sunday 06 July 2025 20:26:20 +0000 (0:00:01.171) 0:00:36.441 *********** 2025-07-06 20:28:27.324858 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:28:27.324868 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:28:27.324879 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:28:27.324890 | orchestrator | 2025-07-06 20:28:27.324900 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-07-06 20:28:27.324911 | orchestrator | Sunday 06 July 2025 20:26:20 +0000 (0:00:00.820) 0:00:37.261 *********** 2025-07-06 20:28:27.324922 | orchestrator | skipping: [testbed-node-0] 
2025-07-06 20:28:27.324933 | orchestrator | 2025-07-06 20:28:27.324943 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-07-06 20:28:27.324954 | orchestrator | Sunday 06 July 2025 20:26:21 +0000 (0:00:00.153) 0:00:37.415 *********** 2025-07-06 20:28:27.324965 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:28:27.324982 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:28:27.324993 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:28:27.325003 | orchestrator | 2025-07-06 20:28:27.325014 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-07-06 20:28:27.325029 | orchestrator | Sunday 06 July 2025 20:26:21 +0000 (0:00:00.283) 0:00:37.699 *********** 2025-07-06 20:28:27.325041 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:28:27.325052 | orchestrator | 2025-07-06 20:28:27.325063 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-07-06 20:28:27.325073 | orchestrator | Sunday 06 July 2025 20:26:21 +0000 (0:00:00.542) 0:00:38.241 *********** 2025-07-06 20:28:27.325086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-06 20:28:27.325107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-06 20:28:27.325132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-06 20:28:27.325144 | orchestrator | 2025-07-06 20:28:27.325156 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-07-06 20:28:27.325167 | orchestrator | Sunday 06 July 2025 20:26:26 +0000 (0:00:04.567) 0:00:42.809 *********** 2025-07-06 20:28:27.325186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 
5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-06 20:28:27.325199 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:28:27.325216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 
6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-06 20:28:27.325256 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:28:27.325269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-06 20:28:27.325281 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:28:27.325292 | orchestrator | 2025-07-06 20:28:27.325303 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-07-06 20:28:27.325314 | orchestrator | Sunday 06 July 2025 20:26:29 +0000 (0:00:03.257) 0:00:46.066 *********** 2025-07-06 20:28:27.325339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-06 20:28:27.325366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-06 
20:28:27.325379 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:28:27.325390 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:28:27.325410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-06 20:28:27.325428 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:28:27.325439 | orchestrator | 2025-07-06 
20:28:27.325450 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-07-06 20:28:27.325470 | orchestrator | Sunday 06 July 2025 20:26:33 +0000 (0:00:03.508) 0:00:49.575 *********** 2025-07-06 20:28:27.325489 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:28:27.325504 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:28:27.325522 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:28:27.325540 | orchestrator | 2025-07-06 20:28:27.325558 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-07-06 20:28:27.325576 | orchestrator | Sunday 06 July 2025 20:26:36 +0000 (0:00:03.511) 0:00:53.086 *********** 2025-07-06 20:28:27.325590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 
'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-06 20:28:27.325894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-06 20:28:27.325936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-06 20:28:27.325949 | 
orchestrator |
2025-07-06 20:28:27.325960 | orchestrator | TASK [glance : Copying over glance-api.conf] ***********************************
2025-07-06 20:28:27.325971 | orchestrator | Sunday 06 July 2025 20:26:41 +0000 (0:00:04.814) 0:00:57.901 ***********
2025-07-06 20:28:27.325982 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:28:27.325993 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:28:27.326003 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:28:27.326014 | orchestrator |
2025-07-06 20:28:27.326083 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ******************
2025-07-06 20:28:27.326095 | orchestrator | Sunday 06 July 2025 20:26:47 +0000 (0:00:06.315) 0:01:04.217 ***********
2025-07-06 20:28:27.326105 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:28:27.326116 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:28:27.326127 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:28:27.326137 | orchestrator |
2025-07-06 20:28:27.326148 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ******************
2025-07-06 20:28:27.326159 | orchestrator | Sunday 06 July 2025 20:26:53 +0000 (0:00:05.627) 0:01:09.844 ***********
2025-07-06 20:28:27.326169 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:28:27.326180 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:28:27.326191 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:28:27.326209 | orchestrator |
2025-07-06 20:28:27.326220 | orchestrator | TASK [glance : Copying over glance-image-import.conf] **************************
2025-07-06 20:28:27.326230 | orchestrator | Sunday 06 July 2025 20:26:58 +0000 (0:00:05.051) 0:01:14.895 ***********
2025-07-06 20:28:27.326275 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:28:27.326287 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:28:27.326298 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:28:27.326308 | orchestrator |
2025-07-06 20:28:27.326320 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] *******************
2025-07-06 20:28:27.326331 | orchestrator | Sunday 06 July 2025 20:27:02 +0000 (0:00:03.764) 0:01:18.659 ***********
2025-07-06 20:28:27.326341 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:28:27.326352 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:28:27.326363 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:28:27.326373 | orchestrator |
2025-07-06 20:28:27.326384 | orchestrator | TASK [glance : Copying over existing policy file] ******************************
2025-07-06 20:28:27.326395 | orchestrator | Sunday 06 July 2025 20:27:05 +0000 (0:00:03.504) 0:01:22.164 ***********
2025-07-06 20:28:27.326416 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:28:27.326428 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:28:27.326438 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:28:27.326449 | orchestrator |
2025-07-06 20:28:27.326459 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
2025-07-06 20:28:27.326470 | orchestrator | Sunday 06 July 2025 20:27:06 +0000 (0:00:00.271) 0:01:22.435 ***********
2025-07-06 20:28:27.326481 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2025-07-06 20:28:27.326492 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:28:27.326503 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2025-07-06 20:28:27.326516 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:28:27.326528 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2025-07-06 20:28:27.326540 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:28:27.326552 | orchestrator |
2025-07-06 20:28:27.326563 | orchestrator | TASK [glance : Check glance
containers] **************************************** 2025-07-06 20:28:27.326575 | orchestrator | Sunday 06 July 2025 20:27:08 +0000 (0:00:02.784) 0:01:25.220 *********** 2025-07-06 20:28:27.326595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-06 20:28:27.326624 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-06 20:28:27.326645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-06 20:28:27.326659 | orchestrator | 2025-07-06 20:28:27.326671 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-07-06 20:28:27.326683 | orchestrator | Sunday 06 July 2025 20:27:12 +0000 (0:00:03.746) 0:01:28.966 *********** 2025-07-06 20:28:27.326696 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:28:27.326708 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:28:27.326720 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:28:27.326731 | orchestrator | 2025-07-06 20:28:27.326741 | orchestrator | 
TASK [glance : Creating Glance database] ***************************************
2025-07-06 20:28:27.326759 | orchestrator | Sunday 06 July 2025 20:27:12 +0000 (0:00:00.297) 0:01:29.263 ***********
2025-07-06 20:28:27.326769 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:28:27.326780 | orchestrator |
2025-07-06 20:28:27.326791 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] **********
2025-07-06 20:28:27.326802 | orchestrator | Sunday 06 July 2025 20:27:15 +0000 (0:00:02.154) 0:01:31.418 ***********
2025-07-06 20:28:27.326812 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:28:27.326823 | orchestrator |
2025-07-06 20:28:27.326834 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] ****************
2025-07-06 20:28:27.326845 | orchestrator | Sunday 06 July 2025 20:27:17 +0000 (0:00:02.085) 0:01:33.504 ***********
2025-07-06 20:28:27.326855 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:28:27.326866 | orchestrator |
2025-07-06 20:28:27.326877 | orchestrator | TASK [glance : Running Glance bootstrap container] *****************************
2025-07-06 20:28:27.326888 | orchestrator | Sunday 06 July 2025 20:27:19 +0000 (0:00:02.037) 0:01:35.541 ***********
2025-07-06 20:28:27.326899 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:28:27.326909 | orchestrator |
2025-07-06 20:28:27.326920 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] ***************
2025-07-06 20:28:27.326931 | orchestrator | Sunday 06 July 2025 20:27:45 +0000 (0:00:25.959) 0:02:01.501 ***********
2025-07-06 20:28:27.326942 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:28:27.326953 | orchestrator |
2025-07-06 20:28:27.326963 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-07-06 20:28:27.326974 | orchestrator | Sunday 06 July 2025 20:27:47 +0000 (0:00:02.601) 0:02:04.102 ***********
2025-07-06 20:28:27.326985 | orchestrator |
2025-07-06 20:28:27.326996 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-07-06 20:28:27.327007 | orchestrator | Sunday 06 July 2025 20:27:47 +0000 (0:00:00.064) 0:02:04.167 ***********
2025-07-06 20:28:27.327018 | orchestrator |
2025-07-06 20:28:27.327028 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-07-06 20:28:27.327039 | orchestrator | Sunday 06 July 2025 20:27:47 +0000 (0:00:00.062) 0:02:04.230 ***********
2025-07-06 20:28:27.327050 | orchestrator |
2025-07-06 20:28:27.327061 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2025-07-06 20:28:27.327072 | orchestrator | Sunday 06 July 2025 20:27:47 +0000 (0:00:00.067) 0:02:04.297 ***********
2025-07-06 20:28:27.327082 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:28:27.327093 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:28:27.327104 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:28:27.327115 | orchestrator |
2025-07-06 20:28:27.327126 | orchestrator | PLAY RECAP *********************************************************************
2025-07-06 20:28:27.327143 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-07-06 20:28:27.327155 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-07-06 20:28:27.327165 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-07-06 20:28:27.327176 | orchestrator |
2025-07-06 20:28:27.327187 | orchestrator |
2025-07-06 20:28:27.327198 | orchestrator | TASKS RECAP ********************************************************************
2025-07-06 20:28:27.327209 | orchestrator | Sunday 06 July 2025 20:28:26 +0000 (0:00:38.353) 0:02:42.651 ***********
2025-07-06 20:28:27.327220 | orchestrator | ===============================================================================
2025-07-06 20:28:27.327230 | orchestrator | glance : Restart glance-api container ---------------------------------- 38.35s
2025-07-06 20:28:27.327262 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 25.96s
2025-07-06 20:28:27.327273 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.73s
2025-07-06 20:28:27.327300 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 6.32s
2025-07-06 20:28:27.327311 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 5.63s
2025-07-06 20:28:27.327322 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 5.05s
2025-07-06 20:28:27.327333 | orchestrator | glance : Copying over config.json files for services -------------------- 4.81s
2025-07-06 20:28:27.327349 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.57s
2025-07-06 20:28:27.327360 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.08s
2025-07-06 20:28:27.327371 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.83s
2025-07-06 20:28:27.327382 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.76s
2025-07-06 20:28:27.327393 | orchestrator | glance : Check glance containers ---------------------------------------- 3.75s
2025-07-06 20:28:27.327404 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.63s
2025-07-06 20:28:27.327415 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.51s
2025-07-06 20:28:27.327426 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.51s
2025-07-06 20:28:27.327437 | orchestrator |
glance : Copying over property-protections-rules.conf ------------------- 3.50s 2025-07-06 20:28:27.327447 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.50s 2025-07-06 20:28:27.327458 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 3.26s 2025-07-06 20:28:27.327470 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.19s 2025-07-06 20:28:27.327480 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.10s 2025-07-06 20:28:27.327520 | orchestrator | 2025-07-06 20:28:27 | INFO  | Task 5f58d1dc-9694-446d-9a1a-f894ac7a84c7 is in state STARTED 2025-07-06 20:28:27.327533 | orchestrator | 2025-07-06 20:28:27 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:28:30.382418 | orchestrator | 2025-07-06 20:28:30 | INFO  | Task ec71205e-4aeb-4c8e-9948-6761051fac47 is in state STARTED 2025-07-06 20:28:30.386321 | orchestrator | 2025-07-06 20:28:30 | INFO  | Task cde5bbe8-5215-4bea-839b-11b055a8b83e is in state STARTED 2025-07-06 20:28:30.387086 | orchestrator | 2025-07-06 20:28:30 | INFO  | Task c1195c61-ab6a-415f-a358-134b1fbe2495 is in state STARTED 2025-07-06 20:28:30.388390 | orchestrator | 2025-07-06 20:28:30 | INFO  | Task 5f58d1dc-9694-446d-9a1a-f894ac7a84c7 is in state STARTED 2025-07-06 20:28:30.388426 | orchestrator | 2025-07-06 20:28:30 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:28:33.444776 | orchestrator | 2025-07-06 20:28:33 | INFO  | Task ec71205e-4aeb-4c8e-9948-6761051fac47 is in state STARTED 2025-07-06 20:28:33.446138 | orchestrator | 2025-07-06 20:28:33 | INFO  | Task cde5bbe8-5215-4bea-839b-11b055a8b83e is in state STARTED 2025-07-06 20:28:33.447968 | orchestrator | 2025-07-06 20:28:33 | INFO  | Task c1195c61-ab6a-415f-a358-134b1fbe2495 is in state STARTED 2025-07-06 20:28:33.449459 | orchestrator | 2025-07-06 20:28:33 | INFO  | Task 
5f58d1dc-9694-446d-9a1a-f894ac7a84c7 is in state STARTED 2025-07-06 20:28:51.782816 | orchestrator | 2025-07-06 20:28:51 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:28:54.836668 | orchestrator | 2025-07-06 20:28:54 | INFO  | Task ec71205e-4aeb-4c8e-9948-6761051fac47 is in state STARTED 2025-07-06 20:28:54.839045 | orchestrator | 2025-07-06 20:28:54 | INFO  | Task d3c6ff6b-203a-4a53-9331-847717b687cb is in state STARTED 2025-07-06 20:28:54.840913 | orchestrator | 2025-07-06 20:28:54 | INFO  | Task cde5bbe8-5215-4bea-839b-11b055a8b83e is in state STARTED 2025-07-06 20:28:54.842268 | orchestrator | 2025-07-06 20:28:54 | INFO  | Task c1195c61-ab6a-415f-a358-134b1fbe2495 is in state STARTED 2025-07-06 20:28:54.846492 | orchestrator | 2025-07-06 20:28:54 | INFO  | Task 5f58d1dc-9694-446d-9a1a-f894ac7a84c7 is in state SUCCESS 2025-07-06 20:28:54.848845 | orchestrator | 2025-07-06 20:28:54.848915 | orchestrator | 2025-07-06 20:28:54.848936 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-06 20:28:54.848956 | orchestrator | 2025-07-06 20:28:54.848973 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-06 20:28:54.848992 | orchestrator | Sunday 06 July 2025 20:25:49 +0000 (0:00:00.262) 0:00:00.262 *********** 2025-07-06 20:28:54.849013 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:28:54.849032 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:28:54.849050 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:28:54.849068 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:28:54.849087 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:28:54.849174 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:28:54.849369 | orchestrator | 2025-07-06 20:28:54.849390 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-06 20:28:54.849409 | orchestrator | Sunday 06 July 2025 20:25:49 +0000 (0:00:00.661) 
0:00:00.924 *********** 2025-07-06 20:28:54.849426 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-07-06 20:28:54.849443 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-07-06 20:28:54.849459 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-07-06 20:28:54.849476 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-07-06 20:28:54.849493 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-07-06 20:28:54.849510 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-07-06 20:28:54.849526 | orchestrator | 2025-07-06 20:28:54.849543 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-07-06 20:28:54.849562 | orchestrator | 2025-07-06 20:28:54.849579 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-07-06 20:28:54.849597 | orchestrator | Sunday 06 July 2025 20:25:50 +0000 (0:00:00.592) 0:00:01.517 *********** 2025-07-06 20:28:54.849615 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:28:54.849634 | orchestrator | 2025-07-06 20:28:54.849667 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-07-06 20:28:54.849718 | orchestrator | Sunday 06 July 2025 20:25:51 +0000 (0:00:01.135) 0:00:02.652 *********** 2025-07-06 20:28:54.849737 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-07-06 20:28:54.849753 | orchestrator | 2025-07-06 20:28:54.849770 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-07-06 20:28:54.849807 | orchestrator | Sunday 06 July 2025 20:25:54 +0000 (0:00:03.399) 0:00:06.051 *********** 2025-07-06 20:28:54.849825 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> 
https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-07-06 20:28:54.849841 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-07-06 20:28:54.849857 | orchestrator | 2025-07-06 20:28:54.849873 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-07-06 20:28:54.849888 | orchestrator | Sunday 06 July 2025 20:26:01 +0000 (0:00:06.437) 0:00:12.489 *********** 2025-07-06 20:28:54.849905 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-07-06 20:28:54.849922 | orchestrator | 2025-07-06 20:28:54.849937 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-07-06 20:28:54.849955 | orchestrator | Sunday 06 July 2025 20:26:04 +0000 (0:00:03.054) 0:00:15.543 *********** 2025-07-06 20:28:54.849997 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-07-06 20:28:54.850085 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-07-06 20:28:54.850112 | orchestrator | 2025-07-06 20:28:54.850133 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-07-06 20:28:54.850151 | orchestrator | Sunday 06 July 2025 20:26:08 +0000 (0:00:03.920) 0:00:19.464 *********** 2025-07-06 20:28:54.850170 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-07-06 20:28:54.850188 | orchestrator | 2025-07-06 20:28:54.850206 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-07-06 20:28:54.850224 | orchestrator | Sunday 06 July 2025 20:26:11 +0000 (0:00:03.304) 0:00:22.768 *********** 2025-07-06 20:28:54.850265 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-07-06 20:28:54.850284 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-07-06 20:28:54.850302 | orchestrator | 
2025-07-06 20:28:54.850319 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-07-06 20:28:54.850337 | orchestrator | Sunday 06 July 2025 20:26:19 +0000 (0:00:07.366) 0:00:30.135 *********** 2025-07-06 20:28:54.850360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-06 20:28:54.850410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-06 20:28:54.850440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-06 20:28:54.850460 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-06 20:28:54.850493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:28:54.850513 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:28:54.850543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:28:54.850562 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-06 20:28:54.850588 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-06 20:28:54.850616 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-06 20:28:54.850636 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-06 20:28:54.850655 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-06 20:28:54.850672 | orchestrator | 2025-07-06 20:28:54.850699 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-07-06 20:28:54.850718 | orchestrator | 
Sunday 06 July 2025 20:26:20 +0000 (0:00:01.857) 0:00:31.992 *********** 2025-07-06 20:28:54.850735 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:28:54.850752 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:28:54.850769 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:28:54.850785 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:28:54.850802 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:28:54.850819 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:28:54.850836 | orchestrator | 2025-07-06 20:28:54.850853 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-07-06 20:28:54.850870 | orchestrator | Sunday 06 July 2025 20:26:21 +0000 (0:00:00.566) 0:00:32.559 *********** 2025-07-06 20:28:54.850886 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:28:54.850902 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:28:54.850919 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:28:54.850937 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:28:54.850954 | orchestrator | 2025-07-06 20:28:54.850971 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-07-06 20:28:54.850987 | orchestrator | Sunday 06 July 2025 20:26:22 +0000 (0:00:00.924) 0:00:33.483 *********** 2025-07-06 20:28:54.851004 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-07-06 20:28:54.851032 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-07-06 20:28:54.851049 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-07-06 20:28:54.851065 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-07-06 20:28:54.851082 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-07-06 20:28:54.851100 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 
2025-07-06 20:28:54.851117 | orchestrator | 2025-07-06 20:28:54.851133 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-07-06 20:28:54.851149 | orchestrator | Sunday 06 July 2025 20:26:24 +0000 (0:00:02.011) 0:00:35.495 *********** 2025-07-06 20:28:54.851174 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-07-06 20:28:54.851195 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-07-06 20:28:54.851216 | orchestrator | skipping: 
[testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-07-06 20:28:54.851288 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-07-06 20:28:54.851311 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-07-06 20:28:54.851352 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-07-06 20:28:54.851371 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-07-06 20:28:54.851383 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-07-06 20:28:54.851438 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-07-06 20:28:54.851458 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-07-06 20:28:54.851476 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-07-06 20:28:54.851488 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-07-06 20:28:54.851499 | orchestrator | 2025-07-06 20:28:54.851510 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-07-06 20:28:54.851522 | orchestrator | Sunday 06 July 2025 20:26:28 +0000 (0:00:03.802) 0:00:39.298 *********** 2025-07-06 20:28:54.851533 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-07-06 20:28:54.851545 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-07-06 20:28:54.851556 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-07-06 20:28:54.851567 | orchestrator | 2025-07-06 20:28:54.851576 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-07-06 20:28:54.851586 | orchestrator | Sunday 06 July 2025 20:26:30 +0000 (0:00:02.243) 0:00:41.541 *********** 2025-07-06 20:28:54.851595 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-07-06 20:28:54.851605 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-07-06 20:28:54.851614 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-07-06 20:28:54.851623 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-07-06 20:28:54.851633 | orchestrator | changed: [testbed-node-4] => 
(item=ceph.client.cinder-backup.keyring) 2025-07-06 20:28:54.851647 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-07-06 20:28:54.851664 | orchestrator | 2025-07-06 20:28:54.851673 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-07-06 20:28:54.851683 | orchestrator | Sunday 06 July 2025 20:26:33 +0000 (0:00:03.264) 0:00:44.806 *********** 2025-07-06 20:28:54.851692 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-07-06 20:28:54.851702 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-07-06 20:28:54.851711 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-07-06 20:28:54.851721 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-07-06 20:28:54.851730 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-07-06 20:28:54.851740 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-07-06 20:28:54.851749 | orchestrator | 2025-07-06 20:28:54.851759 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-07-06 20:28:54.851768 | orchestrator | Sunday 06 July 2025 20:26:34 +0000 (0:00:01.051) 0:00:45.858 *********** 2025-07-06 20:28:54.851777 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:28:54.851787 | orchestrator | 2025-07-06 20:28:54.851796 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-07-06 20:28:54.851806 | orchestrator | Sunday 06 July 2025 20:26:34 +0000 (0:00:00.131) 0:00:45.990 *********** 2025-07-06 20:28:54.851816 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:28:54.851825 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:28:54.851834 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:28:54.851848 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:28:54.851864 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:28:54.851878 
| orchestrator | skipping: [testbed-node-5] 2025-07-06 20:28:54.851892 | orchestrator | 2025-07-06 20:28:54.851907 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-07-06 20:28:54.851923 | orchestrator | Sunday 06 July 2025 20:26:35 +0000 (0:00:00.832) 0:00:46.823 *********** 2025-07-06 20:28:54.851942 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:28:54.851959 | orchestrator | 2025-07-06 20:28:54.851976 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-07-06 20:28:54.851986 | orchestrator | Sunday 06 July 2025 20:26:37 +0000 (0:00:01.327) 0:00:48.150 *********** 2025-07-06 20:28:54.852008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-06 20:28:54.852019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-06 20:28:54.852043 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-06 20:28:54.852054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:28:54.852065 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-06 20:28:54.852080 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-06 20:28:54.852090 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-06 20:28:54.852106 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:28:54.852126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:28:54.852137 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-06 20:28:54.852151 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-06 20:28:54.852162 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-06 20:28:54.852171 | orchestrator | 2025-07-06 20:28:54.852181 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-07-06 20:28:54.852191 | orchestrator | Sunday 06 July 2025 20:26:40 +0000 (0:00:03.559) 0:00:51.710 *********** 2025-07-06 20:28:54.852201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-06 20:28:54.852222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:28:54.852296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-06 20:28:54.852309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:28:54.852325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-06 20:28:54.852335 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:28:54.852345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:28:54.852362 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:28:54.852371 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:28:54.852382 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-06 20:28:54.852398 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-06 20:28:54.852408 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:28:54.852418 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-06 20:28:54.852433 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-06 20:28:54.852443 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:28:54.852453 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-06 20:28:54.852469 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-06 20:28:54.852479 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:28:54.852488 | orchestrator | 2025-07-06 20:28:54.852498 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-07-06 20:28:54.852508 | orchestrator | Sunday 06 July 2025 20:26:41 +0000 (0:00:01.158) 0:00:52.869 *********** 2025-07-06 20:28:54.852524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-06 20:28:54.852534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:28:54.852549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-06 20:28:54.852565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 
20:28:54.852574 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:28:54.852584 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:28:54.852594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-06 20:28:54.852612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:28:54.852622 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:28:54.852632 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 
'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-06 20:28:54.852646 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-06 20:28:54.852656 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:28:54.852672 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-06 20:28:54.852682 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-06 20:28:54.852692 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:28:54.852707 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})  2025-07-06 20:28:54.852718 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-06 20:28:54.852727 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:28:54.852737 | orchestrator | 2025-07-06 20:28:54.852747 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-07-06 20:28:54.852756 | orchestrator | Sunday 06 July 2025 20:26:43 +0000 (0:00:02.000) 0:00:54.869 *********** 2025-07-06 20:28:54.852771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-06 20:28:54.852790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-06 20:28:54.852800 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-06 20:28:54.852818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 
'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-06 20:28:54.852828 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-06 20:28:54.852843 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-06 20:28:54.852860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:28:54.852870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:28:54.852880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:28:54.852895 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-06 20:28:54.852906 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-06 20:28:54.852926 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-06 20:28:54.852936 | orchestrator | 2025-07-06 20:28:54.852946 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-07-06 20:28:54.852956 | orchestrator | Sunday 06 July 2025 20:26:47 +0000 (0:00:03.253) 0:00:58.123 *********** 2025-07-06 20:28:54.852966 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-07-06 20:28:54.852975 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:28:54.852985 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-07-06 20:28:54.852995 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:28:54.853004 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-07-06 20:28:54.853014 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:28:54.853024 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-07-06 20:28:54.853033 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-07-06 20:28:54.853043 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-07-06 20:28:54.853052 | orchestrator | 
2025-07-06 20:28:54.853062 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-07-06 20:28:54.853072 | orchestrator | Sunday 06 July 2025 20:26:49 +0000 (0:00:01.998) 0:01:00.122 *********** 2025-07-06 20:28:54.853081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-06 20:28:54.853097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-06 20:28:54.853108 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-06 20:28:54.853128 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-06 20:28:54.853139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 
'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-06 20:28:54.853154 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-06 20:28:54.853165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:28:54.853180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:28:54.853195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:28:54.853205 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-06 20:28:54.853215 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-06 20:28:54.853225 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-06 20:28:54.853252 | orchestrator | 2025-07-06 20:28:54.853263 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-07-06 20:28:54.853272 | orchestrator | Sunday 06 July 2025 20:26:59 
+0000 (0:00:09.991) 0:01:10.113 *********** 2025-07-06 20:28:54.853287 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:28:54.853297 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:28:54.853307 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:28:54.853317 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:28:54.853326 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:28:54.853336 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:28:54.853351 | orchestrator | 2025-07-06 20:28:54.853361 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-07-06 20:28:54.853370 | orchestrator | Sunday 06 July 2025 20:27:01 +0000 (0:00:02.713) 0:01:12.826 *********** 2025-07-06 20:28:54.853380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-06 20:28:54.853395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:28:54.853405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-06 20:28:54.853415 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:28:54.853425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:28:54.853435 | 
orchestrator | skipping: [testbed-node-1] 2025-07-06 20:28:54.853451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-06 20:28:54.853468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:28:54.853478 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:28:54.853488 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-06 20:28:54.853503 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-06 20:28:54.853513 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:28:54.853523 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-06 20:28:54.853533 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-06 20:28:54.853548 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:28:54.853630 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-06 20:28:54.853643 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-06 20:28:54.853653 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:28:54.853663 | orchestrator | 2025-07-06 20:28:54.853673 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-07-06 20:28:54.853682 | orchestrator | Sunday 06 July 2025 20:27:02 +0000 (0:00:01.181) 0:01:14.008 *********** 2025-07-06 20:28:54.853692 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:28:54.853707 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:28:54.853716 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:28:54.853726 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:28:54.853735 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:28:54.853745 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:28:54.853754 | orchestrator | 2025-07-06 20:28:54.853764 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-07-06 20:28:54.853774 | orchestrator | Sunday 06 July 2025 20:27:03 +0000 (0:00:00.774) 0:01:14.782 *********** 2025-07-06 20:28:54.853784 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-06 20:28:54.853794 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-06 20:28:54.853819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-06 20:28:54.853830 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-06 20:28:54.853844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 
'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-06 20:28:54.853855 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-06 20:28:54.853865 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-06 20:28:54.853889 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-06 20:28:54.853900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:28:54.853915 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-06 20:28:54.853925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:28:54.853935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:28:54.853945 | orchestrator | 2025-07-06 20:28:54.853955 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-07-06 20:28:54.853970 | orchestrator | Sunday 06 July 2025 20:27:06 +0000 (0:00:02.331) 0:01:17.114 *********** 2025-07-06 20:28:54.853980 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:28:54.853990 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:28:54.853999 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:28:54.854009 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:28:54.854047 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:28:54.854059 | orchestrator | skipping: 
[testbed-node-5]
2025-07-06 20:28:54.854068 | orchestrator |
2025-07-06 20:28:54.854078 | orchestrator | TASK [cinder : Creating Cinder database] ***************************************
2025-07-06 20:28:54.854087 | orchestrator | Sunday 06 July 2025 20:27:06 +0000 (0:00:00.657) 0:01:17.771 ***********
2025-07-06 20:28:54.854097 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:28:54.854106 | orchestrator |
2025-07-06 20:28:54.854116 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] **********
2025-07-06 20:28:54.854125 | orchestrator | Sunday 06 July 2025 20:27:08 +0000 (0:00:02.004) 0:01:19.776 ***********
2025-07-06 20:28:54.854134 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:28:54.854144 | orchestrator |
2025-07-06 20:28:54.854154 | orchestrator | TASK [cinder : Running Cinder bootstrap container] *****************************
2025-07-06 20:28:54.854163 | orchestrator | Sunday 06 July 2025 20:27:11 +0000 (0:00:02.330) 0:01:22.106 ***********
2025-07-06 20:28:54.854173 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:28:54.854182 | orchestrator |
2025-07-06 20:28:54.854191 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-07-06 20:28:54.854201 | orchestrator | Sunday 06 July 2025 20:27:29 +0000 (0:00:18.688) 0:01:40.794 ***********
2025-07-06 20:28:54.854211 | orchestrator |
2025-07-06 20:28:54.854226 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-07-06 20:28:54.854254 | orchestrator | Sunday 06 July 2025 20:27:29 +0000 (0:00:00.068) 0:01:40.862 ***********
2025-07-06 20:28:54.854264 | orchestrator |
2025-07-06 20:28:54.854274 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-07-06 20:28:54.854284 | orchestrator | Sunday 06 July 2025 20:27:29 +0000 (0:00:00.065) 0:01:40.928 ***********
2025-07-06 20:28:54.854293 | orchestrator |
2025-07-06
20:28:54.854303 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-07-06 20:28:54.854312 | orchestrator | Sunday 06 July 2025 20:27:29 +0000 (0:00:00.063) 0:01:40.992 ***********
2025-07-06 20:28:54.854322 | orchestrator |
2025-07-06 20:28:54.854331 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-07-06 20:28:54.854341 | orchestrator | Sunday 06 July 2025 20:27:29 +0000 (0:00:00.064) 0:01:41.057 ***********
2025-07-06 20:28:54.854350 | orchestrator |
2025-07-06 20:28:54.854359 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2025-07-06 20:28:54.854369 | orchestrator | Sunday 06 July 2025 20:27:30 +0000 (0:00:00.064) 0:01:41.121 ***********
2025-07-06 20:28:54.854378 | orchestrator |
2025-07-06 20:28:54.854388 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************
2025-07-06 20:28:54.854397 | orchestrator | Sunday 06 July 2025 20:27:30 +0000 (0:00:00.065) 0:01:41.187 ***********
2025-07-06 20:28:54.854409 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:28:54.854426 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:28:54.854441 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:28:54.854456 | orchestrator |
2025-07-06 20:28:54.854472 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ******************
2025-07-06 20:28:54.854488 | orchestrator | Sunday 06 July 2025 20:27:51 +0000 (0:00:21.589) 0:02:02.777 ***********
2025-07-06 20:28:54.854502 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:28:54.854518 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:28:54.854532 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:28:54.854546 | orchestrator |
2025-07-06 20:28:54.854562 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] *********************
2025-07-06 20:28:54.854579 |
orchestrator | Sunday 06 July 2025 20:28:03 +0000 (0:00:11.550) 0:02:14.327 ***********
2025-07-06 20:28:54.854607 | orchestrator | changed: [testbed-node-5]
2025-07-06 20:28:54.854622 | orchestrator | changed: [testbed-node-3]
2025-07-06 20:28:54.854632 | orchestrator | changed: [testbed-node-4]
2025-07-06 20:28:54.854642 | orchestrator |
2025-07-06 20:28:54.854652 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] *********************
2025-07-06 20:28:54.854667 | orchestrator | Sunday 06 July 2025 20:28:41 +0000 (0:00:37.803) 0:02:52.130 ***********
2025-07-06 20:28:54.854677 | orchestrator | changed: [testbed-node-3]
2025-07-06 20:28:54.854686 | orchestrator | changed: [testbed-node-5]
2025-07-06 20:28:54.854696 | orchestrator | changed: [testbed-node-4]
2025-07-06 20:28:54.854705 | orchestrator |
2025-07-06 20:28:54.854715 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] ***
2025-07-06 20:28:54.854724 | orchestrator | Sunday 06 July 2025 20:28:51 +0000 (0:00:10.568) 0:03:02.699 ***********
2025-07-06 20:28:54.854734 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:28:54.854743 | orchestrator |
2025-07-06 20:28:54.854752 | orchestrator | PLAY RECAP *********************************************************************
2025-07-06 20:28:54.854762 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-07-06 20:28:54.854773 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-07-06 20:28:54.854783 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-07-06 20:28:54.854792 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-07-06 20:28:54.854802 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-07-06 20:28:54.854812 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-07-06 20:28:54.854821 | orchestrator |
2025-07-06 20:28:54.854831 | orchestrator |
2025-07-06 20:28:54.854842 | orchestrator | TASKS RECAP ********************************************************************
2025-07-06 20:28:54.854860 | orchestrator | Sunday 06 July 2025 20:28:52 +0000 (0:00:00.624) 0:03:03.324 ***********
2025-07-06 20:28:54.854875 | orchestrator | ===============================================================================
2025-07-06 20:28:54.854892 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 37.80s
2025-07-06 20:28:54.854907 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 21.59s
2025-07-06 20:28:54.854923 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 18.69s
2025-07-06 20:28:54.854939 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 11.55s
2025-07-06 20:28:54.854956 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 10.57s
2025-07-06 20:28:54.854965 | orchestrator | cinder : Copying over cinder.conf --------------------------------------- 9.99s
2025-07-06 20:28:54.854975 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.37s
2025-07-06 20:28:54.854984 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.44s
2025-07-06 20:28:54.855002 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.92s
2025-07-06 20:28:54.855011 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.80s
2025-07-06 20:28:54.855021 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.56s
2025-07-06 20:28:54.855030 | orchestrator |
service-ks-register : cinder | Creating services ------------------------ 3.40s
2025-07-06 20:28:54.855040 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.30s
2025-07-06 20:28:54.855056 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.26s
2025-07-06 20:28:54.855066 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.25s
2025-07-06 20:28:54.855075 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.05s
2025-07-06 20:28:54.855085 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 2.71s
2025-07-06 20:28:54.855094 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.33s
2025-07-06 20:28:54.855103 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.33s
2025-07-06 20:28:54.855113 | orchestrator | cinder : Copy over Ceph keyring files for cinder-volume ----------------- 2.24s
2025-07-06 20:28:54.855122 | orchestrator | 2025-07-06 20:28:54 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:28:57.901947 | orchestrator | 2025-07-06 20:28:57 | INFO  | Task ec71205e-4aeb-4c8e-9948-6761051fac47 is in state STARTED
2025-07-06 20:28:57.904733 | orchestrator | 2025-07-06 20:28:57 | INFO  | Task d3c6ff6b-203a-4a53-9331-847717b687cb is in state STARTED
2025-07-06 20:28:57.906836 | orchestrator | 2025-07-06 20:28:57 | INFO  | Task cde5bbe8-5215-4bea-839b-11b055a8b83e is in state STARTED
2025-07-06 20:28:57.908564 | orchestrator | 2025-07-06 20:28:57 | INFO  | Task c1195c61-ab6a-415f-a358-134b1fbe2495 is in state STARTED
2025-07-06 20:28:57.909256 | orchestrator | 2025-07-06 20:28:57 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:29:00.946858 | orchestrator | 2025-07-06 20:29:00 | INFO  | Task ec71205e-4aeb-4c8e-9948-6761051fac47 is in state STARTED
2025-07-06
20:29:00.950520 | orchestrator | 2025-07-06 20:29:00 | INFO  | Task d3c6ff6b-203a-4a53-9331-847717b687cb is in state STARTED
2025-07-06 20:29:00.951161 | orchestrator | 2025-07-06 20:29:00 | INFO  | Task cde5bbe8-5215-4bea-839b-11b055a8b83e is in state STARTED
2025-07-06 20:29:00.952446 | orchestrator | 2025-07-06 20:29:00 | INFO  | Task c1195c61-ab6a-415f-a358-134b1fbe2495 is in state STARTED
2025-07-06 20:29:00.952476 | orchestrator | 2025-07-06 20:29:00 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:29:03.997378 | orchestrator | 2025-07-06 20:29:03 | INFO  | Task ec71205e-4aeb-4c8e-9948-6761051fac47 is in state STARTED
2025-07-06 20:29:03.998335 | orchestrator | 2025-07-06 20:29:03 | INFO  | Task d3c6ff6b-203a-4a53-9331-847717b687cb is in state STARTED
2025-07-06 20:29:03.999272 | orchestrator | 2025-07-06 20:29:03 | INFO  | Task cde5bbe8-5215-4bea-839b-11b055a8b83e is in state STARTED
2025-07-06 20:29:03.999975 | orchestrator | 2025-07-06 20:29:03 | INFO  | Task c1195c61-ab6a-415f-a358-134b1fbe2495 is in state STARTED
2025-07-06 20:29:03.999998 | orchestrator | 2025-07-06 20:29:03 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:29:07.054295 | orchestrator | 2025-07-06 20:29:07 | INFO  | Task ec71205e-4aeb-4c8e-9948-6761051fac47 is in state STARTED
2025-07-06 20:29:07.055942 | orchestrator | 2025-07-06 20:29:07 | INFO  | Task d3c6ff6b-203a-4a53-9331-847717b687cb is in state STARTED
2025-07-06 20:29:07.057754 | orchestrator | 2025-07-06 20:29:07 | INFO  | Task cde5bbe8-5215-4bea-839b-11b055a8b83e is in state STARTED
2025-07-06 20:29:07.059569 | orchestrator | 2025-07-06 20:29:07 | INFO  | Task c1195c61-ab6a-415f-a358-134b1fbe2495 is in state STARTED
2025-07-06 20:29:07.059608 | orchestrator | 2025-07-06 20:29:07 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:29:10.096461 | orchestrator | 2025-07-06 20:29:10 | INFO  | Task ec71205e-4aeb-4c8e-9948-6761051fac47 is in state STARTED
2025-07-06 20:29:10.096563 | orchestrator
| 2025-07-06 20:29:10 | INFO  | Task d3c6ff6b-203a-4a53-9331-847717b687cb is in state STARTED
2025-07-06 20:29:10.097331 | orchestrator | 2025-07-06 20:29:10 | INFO  | Task cde5bbe8-5215-4bea-839b-11b055a8b83e is in state STARTED
2025-07-06 20:29:10.098973 | orchestrator | 2025-07-06 20:29:10 | INFO  | Task c1195c61-ab6a-415f-a358-134b1fbe2495 is in state STARTED
2025-07-06 20:29:10.099200 | orchestrator | 2025-07-06 20:29:10 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:29:13.149192 | orchestrator | 2025-07-06 20:29:13 | INFO  | Task ec71205e-4aeb-4c8e-9948-6761051fac47 is in state STARTED
2025-07-06 20:29:13.150605 | orchestrator | 2025-07-06 20:29:13 | INFO  | Task d3c6ff6b-203a-4a53-9331-847717b687cb is in state STARTED
2025-07-06 20:29:13.151857 | orchestrator | 2025-07-06 20:29:13 | INFO  | Task cde5bbe8-5215-4bea-839b-11b055a8b83e is in state STARTED
2025-07-06 20:29:13.153430 | orchestrator | 2025-07-06 20:29:13 | INFO  | Task c1195c61-ab6a-415f-a358-134b1fbe2495 is in state STARTED
2025-07-06 20:29:13.153459 | orchestrator | 2025-07-06 20:29:13 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:29:16.199906 | orchestrator | 2025-07-06 20:29:16 | INFO  | Task ec71205e-4aeb-4c8e-9948-6761051fac47 is in state STARTED
2025-07-06 20:29:16.200736 | orchestrator | 2025-07-06 20:29:16 | INFO  | Task d3c6ff6b-203a-4a53-9331-847717b687cb is in state STARTED
2025-07-06 20:29:16.201931 | orchestrator | 2025-07-06 20:29:16 | INFO  | Task cde5bbe8-5215-4bea-839b-11b055a8b83e is in state STARTED
2025-07-06 20:29:16.203198 | orchestrator | 2025-07-06 20:29:16 | INFO  | Task c1195c61-ab6a-415f-a358-134b1fbe2495 is in state STARTED
2025-07-06 20:29:16.203213 | orchestrator | 2025-07-06 20:29:16 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:29:19.246124 | orchestrator | 2025-07-06 20:29:19 | INFO  | Task ec71205e-4aeb-4c8e-9948-6761051fac47 is in state STARTED
2025-07-06 20:29:19.248147 | orchestrator | 2025-07-06 20:29:19 | INFO  |
Task d3c6ff6b-203a-4a53-9331-847717b687cb is in state STARTED
2025-07-06 20:29:19.251748 | orchestrator | 2025-07-06 20:29:19 | INFO  | Task cde5bbe8-5215-4bea-839b-11b055a8b83e is in state STARTED
2025-07-06 20:29:19.253740 | orchestrator | 2025-07-06 20:29:19 | INFO  | Task c1195c61-ab6a-415f-a358-134b1fbe2495 is in state STARTED
2025-07-06 20:29:19.253799 | orchestrator | 2025-07-06 20:29:19 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:29:22.300838 | orchestrator | 2025-07-06 20:29:22 | INFO  | Task ec71205e-4aeb-4c8e-9948-6761051fac47 is in state STARTED
2025-07-06 20:29:22.300984 | orchestrator | 2025-07-06 20:29:22 | INFO  | Task d3c6ff6b-203a-4a53-9331-847717b687cb is in state STARTED
2025-07-06 20:29:22.301011 | orchestrator | 2025-07-06 20:29:22 | INFO  | Task cde5bbe8-5215-4bea-839b-11b055a8b83e is in state STARTED
2025-07-06 20:29:22.301597 | orchestrator | 2025-07-06 20:29:22 | INFO  | Task c1195c61-ab6a-415f-a358-134b1fbe2495 is in state STARTED
2025-07-06 20:29:22.301635 | orchestrator | 2025-07-06 20:29:22 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:29:25.343004 | orchestrator | 2025-07-06 20:29:25 | INFO  | Task ec71205e-4aeb-4c8e-9948-6761051fac47 is in state STARTED
2025-07-06 20:29:25.343155 | orchestrator | 2025-07-06 20:29:25 | INFO  | Task d3c6ff6b-203a-4a53-9331-847717b687cb is in state STARTED
2025-07-06 20:29:25.344126 | orchestrator | 2025-07-06 20:29:25 | INFO  | Task cde5bbe8-5215-4bea-839b-11b055a8b83e is in state STARTED
2025-07-06 20:29:25.348399 | orchestrator | 2025-07-06 20:29:25 | INFO  | Task c1195c61-ab6a-415f-a358-134b1fbe2495 is in state STARTED
2025-07-06 20:29:25.348414 | orchestrator | 2025-07-06 20:29:25 | INFO  | Wait 1 second(s) until the next check
2025-07-06 20:29:28.389093 | orchestrator | 2025-07-06 20:29:28 | INFO  | Task ec71205e-4aeb-4c8e-9948-6761051fac47 is in state SUCCESS
2025-07-06 20:29:28.390338 | orchestrator |
2025-07-06 20:29:28.390378 | orchestrator |
2025-07-06
20:29:28.390392 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-06 20:29:28.390404 | orchestrator |
2025-07-06 20:29:28.390415 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-06 20:29:28.390426 | orchestrator | Sunday 06 July 2025 20:27:16 +0000 (0:00:00.268) 0:00:00.268 ***********
2025-07-06 20:29:28.390437 | orchestrator | ok: [testbed-node-0]
2025-07-06 20:29:28.390449 | orchestrator | ok: [testbed-node-1]
2025-07-06 20:29:28.390460 | orchestrator | ok: [testbed-node-2]
2025-07-06 20:29:28.390471 | orchestrator |
2025-07-06 20:29:28.390482 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-06 20:29:28.390493 | orchestrator | Sunday 06 July 2025 20:27:16 +0000 (0:00:00.277) 0:00:00.546 ***********
2025-07-06 20:29:28.390503 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2025-07-06 20:29:28.390570 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2025-07-06 20:29:28.390584 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2025-07-06 20:29:28.390595 | orchestrator |
2025-07-06 20:29:28.390606 | orchestrator | PLAY [Apply role grafana] ******************************************************
2025-07-06 20:29:28.390617 | orchestrator |
2025-07-06 20:29:28.390628 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-07-06 20:29:28.390789 | orchestrator | Sunday 06 July 2025 20:27:16 +0000 (0:00:00.396) 0:00:00.942 ***********
2025-07-06 20:29:28.390815 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-06 20:29:28.390854 | orchestrator |
2025-07-06 20:29:28.390892 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2025-07-06 20:29:28.390911 | orchestrator | Sunday 06 July 2025
20:27:17 +0000 (0:00:00.507) 0:00:01.450 *********** 2025-07-06 20:29:28.390934 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-06 20:29:28.390958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-06 20:29:28.390996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-06 20:29:28.391033 | orchestrator | 2025-07-06 20:29:28.391049 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-07-06 20:29:28.391068 | orchestrator | Sunday 06 July 2025 20:27:18 +0000 (0:00:00.711) 0:00:02.162 *********** 2025-07-06 20:29:28.391085 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-07-06 20:29:28.391103 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-07-06 20:29:28.391120 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-06 20:29:28.391137 | orchestrator | 2025-07-06 20:29:28.391154 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-07-06 20:29:28.391173 | orchestrator | Sunday 06 July 2025 20:27:18 +0000 (0:00:00.807) 0:00:02.969 *********** 2025-07-06 20:29:28.391193 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:29:28.391211 | orchestrator | 2025-07-06 20:29:28.391255 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-07-06 20:29:28.391275 | orchestrator | Sunday 06 July 2025 20:27:19 +0000 (0:00:00.667) 0:00:03.636 *********** 2025-07-06 20:29:28.391348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': 
'3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-06 20:29:28.391363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-06 20:29:28.391375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-06 20:29:28.391387 | orchestrator | 2025-07-06 20:29:28.391399 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-07-06 20:29:28.391410 | orchestrator | Sunday 06 July 2025 20:27:20 +0000 (0:00:01.341) 0:00:04.978 *********** 2025-07-06 20:29:28.391421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 
'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-06 20:29:28.391450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-06 20:29:28.391462 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:29:28.391473 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:29:28.391512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-06 20:29:28.391525 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:29:28.391536 | orchestrator | 2025-07-06 20:29:28.391547 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-07-06 20:29:28.391557 | orchestrator | Sunday 06 July 2025 20:27:21 +0000 (0:00:00.368) 0:00:05.346 *********** 2025-07-06 20:29:28.391568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-06 20:29:28.391580 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:29:28.391590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-06 20:29:28.391601 | 
orchestrator | skipping: [testbed-node-1] 2025-07-06 20:29:28.391612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-06 20:29:28.391630 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:29:28.391641 | orchestrator | 2025-07-06 20:29:28.391652 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-07-06 20:29:28.391663 | orchestrator | Sunday 06 July 2025 20:27:22 +0000 (0:00:00.818) 0:00:06.164 *********** 2025-07-06 20:29:28.391679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-06 20:29:28.391691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-06 20:29:28.391711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-06 20:29:28.391723 | orchestrator | 2025-07-06 20:29:28.391734 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-07-06 20:29:28.391744 | orchestrator | Sunday 06 July 2025 20:27:23 +0000 (0:00:01.204) 0:00:07.369 *********** 2025-07-06 20:29:28.391756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-06 20:29:28.391767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-06 20:29:28.391785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-06 20:29:28.391797 | orchestrator | 2025-07-06 20:29:28.391808 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-07-06 20:29:28.391819 | orchestrator | Sunday 06 July 2025 20:27:24 +0000 (0:00:01.383) 0:00:08.753 *********** 2025-07-06 20:29:28.391830 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:29:28.391841 | 
orchestrator | skipping: [testbed-node-1] 2025-07-06 20:29:28.391852 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:29:28.391863 | orchestrator | 2025-07-06 20:29:28.391874 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-07-06 20:29:28.391890 | orchestrator | Sunday 06 July 2025 20:27:25 +0000 (0:00:00.477) 0:00:09.231 *********** 2025-07-06 20:29:28.391901 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-07-06 20:29:28.391912 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-07-06 20:29:28.391923 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-07-06 20:29:28.391934 | orchestrator | 2025-07-06 20:29:28.391944 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-07-06 20:29:28.391955 | orchestrator | Sunday 06 July 2025 20:27:26 +0000 (0:00:01.267) 0:00:10.498 *********** 2025-07-06 20:29:28.391966 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-07-06 20:29:28.391978 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-07-06 20:29:28.391989 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-07-06 20:29:28.392000 | orchestrator | 2025-07-06 20:29:28.392011 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-07-06 20:29:28.392022 | orchestrator | Sunday 06 July 2025 20:27:27 +0000 (0:00:01.259) 0:00:11.758 *********** 2025-07-06 20:29:28.392039 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-06 20:29:28.392050 | orchestrator | 2025-07-06 20:29:28.392061 | 
orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-07-06 20:29:28.392072 | orchestrator | Sunday 06 July 2025 20:27:28 +0000 (0:00:00.740) 0:00:12.498 *********** 2025-07-06 20:29:28.392083 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-07-06 20:29:28.392094 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-07-06 20:29:28.392105 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:29:28.392116 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:29:28.392127 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:29:28.392138 | orchestrator | 2025-07-06 20:29:28.392149 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-07-06 20:29:28.392160 | orchestrator | Sunday 06 July 2025 20:27:29 +0000 (0:00:00.675) 0:00:13.173 *********** 2025-07-06 20:29:28.392171 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:29:28.392181 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:29:28.392192 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:29:28.392203 | orchestrator | 2025-07-06 20:29:28.392219 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-07-06 20:29:28.392306 | orchestrator | Sunday 06 July 2025 20:27:29 +0000 (0:00:00.520) 0:00:13.693 *********** 2025-07-06 20:29:28.392327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1090268, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5401955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.392346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1090268, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5401955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.392363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1090268, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5401955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.392392 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1090251, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5341954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.392422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1090251, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5341954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.392442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1090251, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5341954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.392483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1090242, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5321953, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.392503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1090242, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5321953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.392522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1090242, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5321953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.392548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1090257, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5371954, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.392568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1090257, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5371954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.392589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1090257, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5371954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.392608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1090233, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5291953, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.392619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1090233, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5291953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.392630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1090233, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5291953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.392642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1090246, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 
1751830647.5331953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.392664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1090246, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5331953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.392682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1090246, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5331953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.392694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1090254, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 
'mtime': 1751760153.0, 'ctime': 1751830647.5361953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.392712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1090254, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5361953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.392724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1090254, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5361953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.392735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1090230, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5281951, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.392752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1090230, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5281951, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.392763 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1090230, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5281951, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.393465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1090200, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.519195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.393506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1090200, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.519195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.393519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1090200, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.519195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.393531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1090235, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5301952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.393550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1090235, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5301952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.393562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1090235, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5301952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.393585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1090209, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5231953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.393604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1090209, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5231953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.393616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1090209, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5231953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.393627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1090253, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5351954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.393638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1090253, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5351954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.393653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1090253, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5351954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.393668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1090238, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5311954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.393685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1090238, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5311954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.393696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1090238, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5311954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.393706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1090261, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5381954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.393739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1090261, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5381954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.393754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1090261, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5381954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.393764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1090229, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5281951, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.393789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1090229, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5281951, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.393799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1090229, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5281951, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.393809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1090249, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5341954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.393819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1090249, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5341954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.393829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1090249, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5341954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.393844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1090203, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5211952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.393867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1090203, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5211952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.393878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1090203, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5211952, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.393888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1090217, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5281951, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.393898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1090217, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5281951, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.393908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1090217, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5281951, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.393923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1090241, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5311954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.393943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1090241, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5311954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.393954 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1090241, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5311954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.393964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1090381, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5741959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.393975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1090381, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5741959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.393985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1090381, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5741959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.393999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1090368, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5601957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.394074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1090368, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5601957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.394097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1090368, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5601957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.394110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1090281, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5411954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.394122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1090281, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5411954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.394134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1090281, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5411954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.394150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1090450, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.585196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.394168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1090450, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.585196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.394185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1090450, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.585196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.394197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1090285, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5411954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.394209 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1090285, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5411954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.394221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1090285, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5411954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.394258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1090445, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5811958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.394277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1090445, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5811958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.394297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1090445, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5811958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.394309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1090463, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.6911972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.394321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1090463, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.6911972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.394333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1090463, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.6911972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.394348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1090428, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5761957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-07-06 20:29:28.394365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1090428, 'dev': 111, 'nlink': 1, 'atime':
1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5761957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.394381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1090428, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5761957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.394391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1090439, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.580196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.394402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1090439, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.580196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.394411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1090439, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.580196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.394421 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1090289, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5421953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.394446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1090289, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5421953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.394462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1090289, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5421953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.394473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1090371, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5601957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.394483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1090371, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5601957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.394493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1090371, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5601957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.394503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1090469, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.692197, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.394523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1090469, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.692197, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.394533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1090469, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.692197, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.394550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1090447, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5821958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.394560 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1090447, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5821958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.394570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1090447, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5821958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.394581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1090299, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5511956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2025-07-06 20:29:28.394602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1090299, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5511956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.394612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1090299, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5511956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.394627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1090295, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5431955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.394638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1090295, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5431955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.394648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1090295, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5431955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.394658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1090318, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5531955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.394674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1090318, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5531955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.394684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1090318, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5531955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.394700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1090331, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5591955, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.394747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1090331, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5591955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.394759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1090331, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5591955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.394769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1090376, 'dev': 111, 'nlink': 1, 'atime': 
1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5611956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.394788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1090376, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5611956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.394802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1090376, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5611956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.394818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1090434, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5781958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.394828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1090434, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5781958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.394838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1090434, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5781958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.394848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1090378, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5621955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.394908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1090378, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5621955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.394925 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1090378, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.5621955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.394936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': 
'/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1090472, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.695197, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.394952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1090472, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.695197, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.394962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1090472, 'dev': 111, 'nlink': 1, 'atime': 1751760153.0, 'mtime': 1751760153.0, 'ctime': 1751830647.695197, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-06 20:29:28.394972 | orchestrator | 2025-07-06 20:29:28.394983 | orchestrator | TASK [grafana : 
Check grafana containers] ************************************** 2025-07-06 20:29:28.394999 | orchestrator | Sunday 06 July 2025 20:28:06 +0000 (0:00:37.195) 0:00:50.889 *********** 2025-07-06 20:29:28.395009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-06 20:29:28.395019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-06 20:29:28.395033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-06 20:29:28.395044 | orchestrator | 2025-07-06 20:29:28.395054 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-07-06 20:29:28.395064 | orchestrator | Sunday 06 July 2025 20:28:08 +0000 (0:00:01.292) 0:00:52.181 *********** 2025-07-06 20:29:28.395074 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:29:28.395083 | orchestrator | 2025-07-06 20:29:28.395093 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-07-06 20:29:28.395103 | orchestrator | Sunday 06 July 2025 20:28:10 +0000 (0:00:02.228) 0:00:54.410 *********** 2025-07-06 20:29:28.395113 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:29:28.395122 | orchestrator | 2025-07-06 20:29:28.395132 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-07-06 20:29:28.395154 | orchestrator | Sunday 06 July 2025 20:28:12 +0000 (0:00:02.210) 0:00:56.621 *********** 2025-07-06 20:29:28.395164 | orchestrator | 2025-07-06 20:29:28.395174 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-07-06 20:29:28.395190 | orchestrator | Sunday 06 July 2025 20:28:12 +0000 (0:00:00.344) 0:00:56.966 *********** 2025-07-06 20:29:28.395201 | orchestrator | 2025-07-06 20:29:28.395211 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-07-06 20:29:28.395221 | orchestrator | Sunday 06 July 2025 20:28:12 +0000 (0:00:00.064) 0:00:57.030 *********** 2025-07-06 20:29:28.395250 | orchestrator | 2025-07-06 20:29:28.395261 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana 
container] ******************** 2025-07-06 20:29:28.395271 | orchestrator | Sunday 06 July 2025 20:28:13 +0000 (0:00:00.067) 0:00:57.097 *********** 2025-07-06 20:29:28.395281 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:29:28.395290 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:29:28.395300 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:29:28.395310 | orchestrator | 2025-07-06 20:29:28.395324 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-07-06 20:29:28.395334 | orchestrator | Sunday 06 July 2025 20:28:14 +0000 (0:00:01.691) 0:00:58.788 *********** 2025-07-06 20:29:28.395344 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:29:28.395354 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:29:28.395363 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-07-06 20:29:28.395373 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2025-07-06 20:29:28.395383 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 
2025-07-06 20:29:28.395393 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:29:28.395402 | orchestrator | 2025-07-06 20:29:28.395412 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-07-06 20:29:28.395422 | orchestrator | Sunday 06 July 2025 20:28:52 +0000 (0:00:38.230) 0:01:37.019 *********** 2025-07-06 20:29:28.395431 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:29:28.395441 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:29:28.395451 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:29:28.395460 | orchestrator | 2025-07-06 20:29:28.395470 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-07-06 20:29:28.395479 | orchestrator | Sunday 06 July 2025 20:29:20 +0000 (0:00:27.070) 0:02:04.090 *********** 2025-07-06 20:29:28.395489 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:29:28.395499 | orchestrator | 2025-07-06 20:29:28.395508 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-07-06 20:29:28.395518 | orchestrator | Sunday 06 July 2025 20:29:22 +0000 (0:00:02.227) 0:02:06.318 *********** 2025-07-06 20:29:28.395527 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:29:28.395537 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:29:28.395546 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:29:28.395556 | orchestrator | 2025-07-06 20:29:28.395566 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-07-06 20:29:28.395575 | orchestrator | Sunday 06 July 2025 20:29:22 +0000 (0:00:00.369) 0:02:06.687 *********** 2025-07-06 20:29:28.395585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': 
False}}})  2025-07-06 20:29:28.395599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-07-06 20:29:28.395609 | orchestrator | 2025-07-06 20:29:28.395619 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-07-06 20:29:28.395629 | orchestrator | Sunday 06 July 2025 20:29:25 +0000 (0:00:02.368) 0:02:09.056 *********** 2025-07-06 20:29:28.395639 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:29:28.395648 | orchestrator | 2025-07-06 20:29:28.395658 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:29:28.395673 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-07-06 20:29:28.395684 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-07-06 20:29:28.395694 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-07-06 20:29:28.395704 | orchestrator | 2025-07-06 20:29:28.395713 | orchestrator | 2025-07-06 20:29:28.395727 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:29:28.395737 | orchestrator | Sunday 06 July 2025 20:29:25 +0000 (0:00:00.255) 0:02:09.311 *********** 2025-07-06 20:29:28.395746 | orchestrator | =============================================================================== 2025-07-06 20:29:28.395756 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.23s 2025-07-06 20:29:28.395766 | orchestrator | grafana : Copying over custom 
dashboards ------------------------------- 37.20s 2025-07-06 20:29:28.395775 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 27.07s 2025-07-06 20:29:28.395785 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.37s 2025-07-06 20:29:28.395795 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.23s 2025-07-06 20:29:28.395809 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.23s 2025-07-06 20:29:28.395819 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.21s 2025-07-06 20:29:28.395829 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.69s 2025-07-06 20:29:28.395838 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.38s 2025-07-06 20:29:28.395848 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.34s 2025-07-06 20:29:28.395857 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.29s 2025-07-06 20:29:28.395867 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.27s 2025-07-06 20:29:28.395876 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.26s 2025-07-06 20:29:28.395886 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.20s 2025-07-06 20:29:28.395895 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.82s 2025-07-06 20:29:28.395905 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.81s 2025-07-06 20:29:28.395914 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.74s 2025-07-06 20:29:28.395924 | orchestrator | grafana : Ensuring config directories exist 
----------------------------- 0.71s 2025-07-06 20:29:28.395933 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.68s 2025-07-06 20:29:28.395943 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.67s 2025-07-06 20:29:28.395953 | orchestrator | 2025-07-06 20:29:28 | INFO  | Task d3c6ff6b-203a-4a53-9331-847717b687cb is in state STARTED 2025-07-06 20:29:28.395963 | orchestrator | 2025-07-06 20:29:28 | INFO  | Task cde5bbe8-5215-4bea-839b-11b055a8b83e is in state STARTED 2025-07-06 20:29:28.396086 | orchestrator | 2025-07-06 20:29:28 | INFO  | Task c1195c61-ab6a-415f-a358-134b1fbe2495 is in state STARTED 2025-07-06 20:29:28.396102 | orchestrator | 2025-07-06 20:29:28 | INFO  | Wait 1 second(s) until the next check
[... identical STARTED status checks for the three tasks repeated every ~3 seconds from 20:29:31 to 20:29:49 ...]
2025-07-06 20:29:52.765610 | orchestrator | 2025-07-06 20:29:52 | INFO  | Task d3c6ff6b-203a-4a53-9331-847717b687cb is in state SUCCESS
2025-07-06 20:29:55.809959 | orchestrator | 2025-07-06 20:29:55 | INFO  | Task c1195c61-ab6a-415f-a358-134b1fbe2495 is in state SUCCESS
2025-07-06 20:29:58.859212 | orchestrator | 2025-07-06 20:29:58 | INFO  | Task cde5bbe8-5215-4bea-839b-11b055a8b83e is in state STARTED
[... identical STARTED status checks for task cde5bbe8-5215-4bea-839b-11b055a8b83e repeated every ~3 seconds from 20:30:01 to 20:34:08 ...]
2025-07-06 20:34:11.423309
| orchestrator | 2025-07-06 20:34:11 | INFO  | Task cde5bbe8-5215-4bea-839b-11b055a8b83e is in state STARTED 2025-07-06 20:34:11.423445 | orchestrator | 2025-07-06 20:34:11 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:34:14.462212 | orchestrator | 2025-07-06 20:34:14 | INFO  | Task cde5bbe8-5215-4bea-839b-11b055a8b83e is in state STARTED 2025-07-06 20:34:14.462339 | orchestrator | 2025-07-06 20:34:14 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:34:17.515988 | orchestrator | 2025-07-06 20:34:17 | INFO  | Task cde5bbe8-5215-4bea-839b-11b055a8b83e is in state STARTED 2025-07-06 20:34:17.516093 | orchestrator | 2025-07-06 20:34:17 | INFO  | Wait 1 second(s) until the next check 2025-07-06 20:34:20.557583 | orchestrator | 2025-07-06 20:34:20 | INFO  | Task cde5bbe8-5215-4bea-839b-11b055a8b83e is in state SUCCESS 2025-07-06 20:34:20.558441 | orchestrator | 2025-07-06 20:34:20.558479 | orchestrator | 2025-07-06 20:34:20.558492 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-06 20:34:20.558504 | orchestrator | 2025-07-06 20:34:20.558516 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-06 20:34:20.558527 | orchestrator | Sunday 06 July 2025 20:28:57 +0000 (0:00:00.270) 0:00:00.270 *********** 2025-07-06 20:34:20.558538 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:34:20.558551 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:34:20.558619 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:34:20.558683 | orchestrator | 2025-07-06 20:34:20.558697 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-06 20:34:20.558709 | orchestrator | Sunday 06 July 2025 20:28:57 +0000 (0:00:00.319) 0:00:00.590 *********** 2025-07-06 20:34:20.558720 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-07-06 20:34:20.558815 | orchestrator | ok: [testbed-node-1] => 
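The long STARTED→SUCCESS run above is a client polling a task until it reaches a terminal state, sleeping between checks. A minimal sketch of such a loop (the `get_state` callback and the fixed interval are assumptions for illustration; the real osism client internals are not shown in this log):

```python
import time

TERMINAL = {"SUCCESS", "FAILURE"}

def wait_for_task(get_state, interval=1.0, timeout=600.0, sleep=time.sleep):
    """Poll get_state() until the task reaches a terminal state.

    Returns the final state or raises TimeoutError. The 1-second interval
    mirrors the "Wait 1 second(s) until the next check" lines above.
    """
    deadline = time.monotonic() + timeout
    while True:
        state = get_state()
        if state in TERMINAL:
            return state
        if time.monotonic() >= deadline:
            raise TimeoutError(f"task still in state {state}")
        sleep(interval)

# Example: a task that reports STARTED twice, then SUCCESS.
states = iter(["STARTED", "STARTED", "SUCCESS"])
print(wait_for_task(lambda: next(states), sleep=lambda s: None))
```

Injecting `sleep` keeps the loop testable without real delays.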
(item=enable_octavia_True) 2025-07-06 20:34:20.558854 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-07-06 20:34:20.558922 | orchestrator | 2025-07-06 20:34:20.558968 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-07-06 20:34:20.558979 | orchestrator | 2025-07-06 20:34:20.558990 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-07-06 20:34:20.559001 | orchestrator | Sunday 06 July 2025 20:28:57 +0000 (0:00:00.411) 0:00:01.001 *********** 2025-07-06 20:34:20.559012 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:34:20.559026 | orchestrator | 2025-07-06 20:34:20.559065 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-07-06 20:34:20.559078 | orchestrator | Sunday 06 July 2025 20:28:58 +0000 (0:00:00.554) 0:00:01.555 *********** 2025-07-06 20:34:20.559091 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-07-06 20:34:20.559103 | orchestrator | 2025-07-06 20:34:20.559116 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-07-06 20:34:20.559128 | orchestrator | Sunday 06 July 2025 20:29:01 +0000 (0:00:03.383) 0:00:04.939 *********** 2025-07-06 20:34:20.559141 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-07-06 20:34:20.559153 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-07-06 20:34:20.559166 | orchestrator | 2025-07-06 20:34:20.559179 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-07-06 20:34:20.559191 | orchestrator | Sunday 06 July 2025 20:29:08 +0000 (0:00:06.366) 0:00:11.306 *********** 2025-07-06 20:34:20.559204 | 
orchestrator | ok: [testbed-node-0] => (item=service) 2025-07-06 20:34:20.559215 | orchestrator | 2025-07-06 20:34:20.559226 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-07-06 20:34:20.559236 | orchestrator | Sunday 06 July 2025 20:29:11 +0000 (0:00:03.175) 0:00:14.481 *********** 2025-07-06 20:34:20.559247 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-07-06 20:34:20.559270 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-07-06 20:34:20.559281 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-07-06 20:34:20.559292 | orchestrator | 2025-07-06 20:34:20.559303 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-07-06 20:34:20.559314 | orchestrator | Sunday 06 July 2025 20:29:19 +0000 (0:00:08.025) 0:00:22.507 *********** 2025-07-06 20:34:20.559325 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-07-06 20:34:20.559336 | orchestrator | 2025-07-06 20:34:20.559387 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-07-06 20:34:20.559399 | orchestrator | Sunday 06 July 2025 20:29:22 +0000 (0:00:03.296) 0:00:25.804 *********** 2025-07-06 20:34:20.559432 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-07-06 20:34:20.559443 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-07-06 20:34:20.559453 | orchestrator | 2025-07-06 20:34:20.559464 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-07-06 20:34:20.559475 | orchestrator | Sunday 06 July 2025 20:29:30 +0000 (0:00:07.258) 0:00:33.062 *********** 2025-07-06 20:34:20.559486 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-07-06 20:34:20.559496 | orchestrator | changed: [testbed-node-0] => 
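The service-ks-register tasks above are idempotent: an item reports `changed` when the resource had to be created and `ok` when it already matched. That ensure-pattern can be sketched as follows (the dict-backed registry is a hypothetical stand-in, not the module's actual code):

```python
def ensure(registry, name, value):
    """Create/update name in registry; report an Ansible-style status."""
    if registry.get(name) == value:
        return "ok"           # already present with the desired value
    registry[name] = value
    return "changed"          # created or updated

roles = {}
print(ensure(roles, "load-balancer_member", True))  # changed
print(ensure(roles, "load-balancer_member", True))  # ok
```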
(item=load-balancer_global_observer) 2025-07-06 20:34:20.559507 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-07-06 20:34:20.559518 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-07-06 20:34:20.559528 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-07-06 20:34:20.559539 | orchestrator | 2025-07-06 20:34:20.559565 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-07-06 20:34:20.559585 | orchestrator | Sunday 06 July 2025 20:29:45 +0000 (0:00:15.620) 0:00:48.683 *********** 2025-07-06 20:34:20.559596 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:34:20.559607 | orchestrator | 2025-07-06 20:34:20.559618 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-07-06 20:34:20.559629 | orchestrator | Sunday 06 July 2025 20:29:46 +0000 (0:00:00.530) 0:00:49.214 *********** 2025-07-06 20:34:20.559656 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"action": "os_nova_flavor", "changed": false, "extra_data": {"data": null, "details": "503 Service Unavailable: No server is available to handle this request.", "response": "
503 Service Unavailable
\nNo server is available to handle this request.\n\n"}, "msg": "HttpException: 503: Server Error for url: https://api-int.testbed.osism.xyz:8774/v2.1/flavors/amphora, 503 Service Unavailable: No server is available to handle this request."} 2025-07-06 20:34:20.559671 | orchestrator | 2025-07-06 20:34:20.559682 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:34:20.559694 | orchestrator | testbed-node-0 : ok=11  changed=5  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-07-06 20:34:20.559706 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:34:20.559718 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:34:20.559729 | orchestrator | 2025-07-06 20:34:20.559739 | orchestrator | 2025-07-06 20:34:20.559750 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:34:20.559761 | orchestrator | Sunday 06 July 2025 20:29:49 +0000 (0:00:03.378) 0:00:52.592 *********** 2025-07-06 20:34:20.559772 | orchestrator | =============================================================================== 2025-07-06 20:34:20.559863 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.62s 2025-07-06 20:34:20.559874 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.03s 2025-07-06 20:34:20.559885 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.26s 2025-07-06 20:34:20.559896 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.37s 2025-07-06 20:34:20.559906 | orchestrator | service-ks-register : octavia | Creating services ----------------------- 3.38s 2025-07-06 20:34:20.559917 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 
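The play failed because the load balancer answered 503 ("No server is available to handle this request") for the Nova API while creating the amphora flavor; the next play then waits over 80 seconds for the Nova port to come up. A hedged sketch of retrying such an operation on 503 (the `create_flavor` stand-in and the exception class are illustrative, not the openstack.cloud module's API):

```python
import time

class HttpError(Exception):
    def __init__(self, status):
        super().__init__(f"HTTP {status}")
        self.status = status

def retry_on_503(op, attempts=5, delay=1.0, sleep=time.sleep):
    """Retry op() while it raises HttpError(503); re-raise anything else."""
    for i in range(attempts):
        try:
            return op()
        except HttpError as exc:
            if exc.status != 503 or i == attempts - 1:
                raise
            sleep(delay)

calls = {"n": 0}
def create_flavor():       # hypothetical stand-in for the os_nova_flavor call
    calls["n"] += 1
    if calls["n"] < 3:
        raise HttpError(503)   # backend not ready yet
    return "amphora"

print(retry_on_503(create_flavor, sleep=lambda s: None))
```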
3.38s 2025-07-06 20:34:20.559927 | orchestrator | service-ks-register : octavia | Creating roles -------------------------- 3.30s 2025-07-06 20:34:20.560205 | orchestrator | service-ks-register : octavia | Creating projects ----------------------- 3.18s 2025-07-06 20:34:20.560217 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.55s 2025-07-06 20:34:20.560228 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.53s 2025-07-06 20:34:20.560239 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.41s 2025-07-06 20:34:20.560250 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s 2025-07-06 20:34:20.560260 | orchestrator | 2025-07-06 20:34:20.560271 | orchestrator | 2025-07-06 20:34:20.560282 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-06 20:34:20.560292 | orchestrator | 2025-07-06 20:34:20.560316 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-06 20:34:20.560327 | orchestrator | Sunday 06 July 2025 20:28:30 +0000 (0:00:00.176) 0:00:00.176 *********** 2025-07-06 20:34:20.560338 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:34:20.560349 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:34:20.560360 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:34:20.560371 | orchestrator | 2025-07-06 20:34:20.560381 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-06 20:34:20.560428 | orchestrator | Sunday 06 July 2025 20:28:30 +0000 (0:00:00.298) 0:00:00.475 *********** 2025-07-06 20:34:20.560440 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-07-06 20:34:20.560451 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-07-06 20:34:20.560462 | orchestrator | ok: [testbed-node-2] => 
(item=enable_nova_True) 2025-07-06 20:34:20.560472 | orchestrator | 2025-07-06 20:34:20.560483 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-07-06 20:34:20.560494 | orchestrator | 2025-07-06 20:34:20.560504 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-07-06 20:34:20.560515 | orchestrator | Sunday 06 July 2025 20:28:31 +0000 (0:00:00.614) 0:00:01.089 *********** 2025-07-06 20:34:20.560525 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:34:20.560536 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:34:20.560547 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:34:20.560557 | orchestrator | 2025-07-06 20:34:20.560568 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:34:20.560579 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:34:20.560590 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:34:20.560608 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:34:20.560619 | orchestrator | 2025-07-06 20:34:20.560629 | orchestrator | 2025-07-06 20:34:20.560640 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:34:20.560651 | orchestrator | Sunday 06 July 2025 20:29:54 +0000 (0:01:22.726) 0:01:23.816 *********** 2025-07-06 20:34:20.560661 | orchestrator | =============================================================================== 2025-07-06 20:34:20.560672 | orchestrator | Waiting for Nova public port to be UP ---------------------------------- 82.73s 2025-07-06 20:34:20.560682 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.61s 2025-07-06 20:34:20.560693 | orchestrator | Group hosts based on 
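The "Waiting for Nova public port to be UP" task blocked for 82.73s until the API port accepted connections (the playbook likely uses Ansible's wait_for for this). The underlying check can be sketched with a plain socket probe; the host/port are placeholders, not values from this log:

```python
import socket
import time

def wait_for_port(host, port, timeout=300.0, interval=3.0):
    """Return True once a TCP connect to host:port succeeds, else raise."""
    deadline = time.monotonic() + timeout
    while True:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            if time.monotonic() >= deadline:
                raise TimeoutError(f"{host}:{port} not reachable")
            time.sleep(interval)
```

A successful connect is enough here: the task only checks that the port is up, not that the API answers correctly.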
Kolla action --------------------------------------- 0.30s 2025-07-06 20:34:20.560703 | orchestrator | 2025-07-06 20:34:20.560714 | orchestrator | 2025-07-06 20:34:20.560724 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-06 20:34:20.560735 | orchestrator | 2025-07-06 20:34:20.560746 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-07-06 20:34:20.560767 | orchestrator | Sunday 06 July 2025 20:26:01 +0000 (0:00:00.288) 0:00:00.288 *********** 2025-07-06 20:34:20.560778 | orchestrator | changed: [testbed-manager] 2025-07-06 20:34:20.560789 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:34:20.560800 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:34:20.560811 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:34:20.560822 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:34:20.560832 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:34:20.560843 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:34:20.560854 | orchestrator | 2025-07-06 20:34:20.560865 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-06 20:34:20.560875 | orchestrator | Sunday 06 July 2025 20:26:02 +0000 (0:00:00.842) 0:00:01.131 *********** 2025-07-06 20:34:20.560886 | orchestrator | changed: [testbed-manager] 2025-07-06 20:34:20.560897 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:34:20.560907 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:34:20.560918 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:34:20.560929 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:34:20.560939 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:34:20.560950 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:34:20.560961 | orchestrator | 2025-07-06 20:34:20.560971 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 
2025-07-06 20:34:20.560990 | orchestrator | Sunday 06 July 2025 20:26:02 +0000 (0:00:00.699) 0:00:01.830 *********** 2025-07-06 20:34:20.561001 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-07-06 20:34:20.561011 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-07-06 20:34:20.561022 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-07-06 20:34:20.561033 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-07-06 20:34:20.561043 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-07-06 20:34:20.561054 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-07-06 20:34:20.561064 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-07-06 20:34:20.561075 | orchestrator | 2025-07-06 20:34:20.561086 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-07-06 20:34:20.561096 | orchestrator | 2025-07-06 20:34:20.561107 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-07-06 20:34:20.561118 | orchestrator | Sunday 06 July 2025 20:26:03 +0000 (0:00:00.913) 0:00:02.744 *********** 2025-07-06 20:34:20.561128 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:34:20.561139 | orchestrator | 2025-07-06 20:34:20.561150 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-07-06 20:34:20.561160 | orchestrator | Sunday 06 July 2025 20:26:04 +0000 (0:00:00.763) 0:00:03.508 *********** 2025-07-06 20:34:20.561171 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-07-06 20:34:20.561182 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-07-06 20:34:20.561192 | orchestrator | 2025-07-06 20:34:20.561203 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 
2025-07-06 20:34:20.561214 | orchestrator | Sunday 06 July 2025 20:26:08 +0000 (0:00:04.033) 0:00:07.541 *********** 2025-07-06 20:34:20.561224 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-07-06 20:34:20.561235 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-07-06 20:34:20.561245 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:34:20.561256 | orchestrator | 2025-07-06 20:34:20.561267 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-07-06 20:34:20.561277 | orchestrator | Sunday 06 July 2025 20:26:12 +0000 (0:00:03.978) 0:00:11.520 *********** 2025-07-06 20:34:20.561288 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:34:20.561299 | orchestrator | 2025-07-06 20:34:20.561309 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-07-06 20:34:20.561320 | orchestrator | Sunday 06 July 2025 20:26:13 +0000 (0:00:00.626) 0:00:12.146 *********** 2025-07-06 20:34:20.561331 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:34:20.561341 | orchestrator | 2025-07-06 20:34:20.561352 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-07-06 20:34:20.561363 | orchestrator | Sunday 06 July 2025 20:26:14 +0000 (0:00:01.279) 0:00:13.426 *********** 2025-07-06 20:34:20.561373 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:34:20.561384 | orchestrator | 2025-07-06 20:34:20.561394 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-07-06 20:34:20.561421 | orchestrator | Sunday 06 July 2025 20:26:17 +0000 (0:00:02.901) 0:00:16.328 *********** 2025-07-06 20:34:20.561432 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:34:20.561443 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:34:20.561454 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:34:20.561464 | orchestrator | 2025-07-06 20:34:20.561475 | 
orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-07-06 20:34:20.561486 | orchestrator | Sunday 06 July 2025 20:26:17 +0000 (0:00:00.300) 0:00:16.628 *********** 2025-07-06 20:34:20.561502 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:34:20.561513 | orchestrator | 2025-07-06 20:34:20.561523 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-07-06 20:34:20.561534 | orchestrator | Sunday 06 July 2025 20:26:47 +0000 (0:00:30.028) 0:00:46.657 *********** 2025-07-06 20:34:20.561551 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:34:20.561562 | orchestrator | 2025-07-06 20:34:20.561573 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-07-06 20:34:20.561584 | orchestrator | Sunday 06 July 2025 20:27:01 +0000 (0:00:13.839) 0:01:00.497 *********** 2025-07-06 20:34:20.561594 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:34:20.561605 | orchestrator | 2025-07-06 20:34:20.561616 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-07-06 20:34:20.561627 | orchestrator | Sunday 06 July 2025 20:27:13 +0000 (0:00:11.705) 0:01:12.202 *********** 2025-07-06 20:34:20.561637 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:34:20.561648 | orchestrator | 2025-07-06 20:34:20.561658 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-07-06 20:34:20.561669 | orchestrator | Sunday 06 July 2025 20:27:14 +0000 (0:00:01.088) 0:01:13.291 *********** 2025-07-06 20:34:20.561680 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:34:20.561690 | orchestrator | 2025-07-06 20:34:20.561708 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-07-06 20:34:20.561719 | orchestrator | Sunday 06 July 2025 20:27:14 +0000 (0:00:00.452) 0:01:13.744 *********** 2025-07-06 
20:34:20.561730 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:34:20.561741 | orchestrator | 2025-07-06 20:34:20.561752 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-07-06 20:34:20.561763 | orchestrator | Sunday 06 July 2025 20:27:15 +0000 (0:00:00.528) 0:01:14.272 *********** 2025-07-06 20:34:20.561774 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:34:20.561784 | orchestrator | 2025-07-06 20:34:20.561796 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-07-06 20:34:20.561807 | orchestrator | Sunday 06 July 2025 20:27:32 +0000 (0:00:17.336) 0:01:31.608 *********** 2025-07-06 20:34:20.561817 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:34:20.561828 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:34:20.561839 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:34:20.561850 | orchestrator | 2025-07-06 20:34:20.561860 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-07-06 20:34:20.561871 | orchestrator | 2025-07-06 20:34:20.561882 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-07-06 20:34:20.561893 | orchestrator | Sunday 06 July 2025 20:27:33 +0000 (0:00:00.343) 0:01:31.952 *********** 2025-07-06 20:34:20.561904 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:34:20.561915 | orchestrator | 2025-07-06 20:34:20.561925 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-07-06 20:34:20.561936 | orchestrator | Sunday 06 July 2025 20:27:33 +0000 (0:00:00.611) 0:01:32.563 *********** 2025-07-06 20:34:20.561947 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:34:20.561958 | orchestrator | skipping: [testbed-node-2] 
2025-07-06 20:34:20.561968 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:34:20.561979 | orchestrator | 2025-07-06 20:34:20.561990 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-07-06 20:34:20.562001 | orchestrator | Sunday 06 July 2025 20:27:35 +0000 (0:00:02.016) 0:01:34.579 *********** 2025-07-06 20:34:20.562011 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:34:20.562067 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:34:20.562078 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:34:20.562089 | orchestrator | 2025-07-06 20:34:20.562100 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-07-06 20:34:20.562110 | orchestrator | Sunday 06 July 2025 20:27:37 +0000 (0:00:02.126) 0:01:36.705 *********** 2025-07-06 20:34:20.562121 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:34:20.562132 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:34:20.562142 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:34:20.562153 | orchestrator | 2025-07-06 20:34:20.562170 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-07-06 20:34:20.562181 | orchestrator | Sunday 06 July 2025 20:27:38 +0000 (0:00:00.376) 0:01:37.082 *********** 2025-07-06 20:34:20.562192 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-07-06 20:34:20.562203 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:34:20.562213 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-07-06 20:34:20.562224 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:34:20.562234 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-07-06 20:34:20.562245 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-07-06 20:34:20.562256 | orchestrator | 2025-07-06 20:34:20.562267 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts 
exist] ****************** 2025-07-06 20:34:20.562277 | orchestrator | Sunday 06 July 2025 20:27:47 +0000 (0:00:08.825) 0:01:45.907 *********** 2025-07-06 20:34:20.562288 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:34:20.562299 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:34:20.562309 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:34:20.562320 | orchestrator | 2025-07-06 20:34:20.562331 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-07-06 20:34:20.562341 | orchestrator | Sunday 06 July 2025 20:27:47 +0000 (0:00:00.352) 0:01:46.260 *********** 2025-07-06 20:34:20.562352 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-07-06 20:34:20.562363 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:34:20.562373 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-07-06 20:34:20.562383 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:34:20.562394 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-07-06 20:34:20.562435 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:34:20.562447 | orchestrator | 2025-07-06 20:34:20.562458 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-07-06 20:34:20.562469 | orchestrator | Sunday 06 July 2025 20:27:48 +0000 (0:00:00.688) 0:01:46.948 *********** 2025-07-06 20:34:20.562479 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:34:20.562495 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:34:20.562506 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:34:20.562517 | orchestrator | 2025-07-06 20:34:20.562527 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-07-06 20:34:20.562538 | orchestrator | Sunday 06 July 2025 20:27:48 +0000 (0:00:00.743) 0:01:47.692 *********** 2025-07-06 20:34:20.562548 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:34:20.562559 | 
orchestrator | skipping: [testbed-node-2] 2025-07-06 20:34:20.562569 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:34:20.562580 | orchestrator | 2025-07-06 20:34:20.562590 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-07-06 20:34:20.562601 | orchestrator | Sunday 06 July 2025 20:27:49 +0000 (0:00:01.025) 0:01:48.717 *********** 2025-07-06 20:34:20.562612 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:34:20.562622 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:34:20.562633 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:34:20.562643 | orchestrator | 2025-07-06 20:34:20.562654 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-07-06 20:34:20.562665 | orchestrator | Sunday 06 July 2025 20:27:52 +0000 (0:00:02.609) 0:01:51.327 *********** 2025-07-06 20:34:20.562682 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:34:20.562694 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:34:20.562704 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:34:20.562715 | orchestrator | 2025-07-06 20:34:20.562726 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-07-06 20:34:20.562737 | orchestrator | Sunday 06 July 2025 20:28:13 +0000 (0:00:20.845) 0:02:12.172 *********** 2025-07-06 20:34:20.562748 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:34:20.562758 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:34:20.562769 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:34:20.562788 | orchestrator | 2025-07-06 20:34:20.562799 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-07-06 20:34:20.562810 | orchestrator | Sunday 06 July 2025 20:28:24 +0000 (0:00:11.578) 0:02:23.751 *********** 2025-07-06 20:34:20.562820 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:34:20.562831 | orchestrator | 
skipping: [testbed-node-1] 2025-07-06 20:34:20.562841 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:34:20.562852 | orchestrator | 2025-07-06 20:34:20.562863 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-07-06 20:34:20.562874 | orchestrator | Sunday 06 July 2025 20:28:25 +0000 (0:00:00.846) 0:02:24.597 *********** 2025-07-06 20:34:20.562884 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:34:20.562895 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:34:20.562905 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:34:20.562916 | orchestrator | 2025-07-06 20:34:20.562927 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-07-06 20:34:20.562938 | orchestrator | Sunday 06 July 2025 20:28:37 +0000 (0:00:11.409) 0:02:36.006 *********** 2025-07-06 20:34:20.562948 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:34:20.562959 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:34:20.562969 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:34:20.562980 | orchestrator | 2025-07-06 20:34:20.562991 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-07-06 20:34:20.563001 | orchestrator | Sunday 06 July 2025 20:28:38 +0000 (0:00:01.434) 0:02:37.440 *********** 2025-07-06 20:34:20.563012 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:34:20.563023 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:34:20.563033 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:34:20.563044 | orchestrator | 2025-07-06 20:34:20.563054 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-07-06 20:34:20.563065 | orchestrator | 2025-07-06 20:34:20.563076 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-07-06 20:34:20.563087 | orchestrator | Sunday 06 July 2025 
20:28:38 +0000 (0:00:00.335) 0:02:37.776 *********** 2025-07-06 20:34:20.563097 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:34:20.563108 | orchestrator | 2025-07-06 20:34:20.563119 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-07-06 20:34:20.563130 | orchestrator | Sunday 06 July 2025 20:28:39 +0000 (0:00:00.546) 0:02:38.323 *********** 2025-07-06 20:34:20.563140 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-07-06 20:34:20.563151 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-07-06 20:34:20.563162 | orchestrator | 2025-07-06 20:34:20.563173 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-07-06 20:34:20.563183 | orchestrator | Sunday 06 July 2025 20:28:42 +0000 (0:00:03.348) 0:02:41.671 *********** 2025-07-06 20:34:20.563194 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-07-06 20:34:20.563208 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-07-06 20:34:20.563226 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-07-06 20:34:20.563244 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-07-06 20:34:20.563263 | orchestrator | 2025-07-06 20:34:20.563280 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-07-06 20:34:20.563299 | orchestrator | Sunday 06 July 2025 20:28:49 +0000 (0:00:06.704) 0:02:48.376 *********** 2025-07-06 20:34:20.563316 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-07-06 20:34:20.563332 | orchestrator | 
2025-07-06 20:34:20.563343 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-07-06 20:34:20.563354 | orchestrator | Sunday 06 July 2025 20:28:52 +0000 (0:00:03.243) 0:02:51.620 *********** 2025-07-06 20:34:20.563372 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-07-06 20:34:20.563383 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-07-06 20:34:20.563394 | orchestrator | 2025-07-06 20:34:20.563429 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-07-06 20:34:20.563441 | orchestrator | Sunday 06 July 2025 20:28:56 +0000 (0:00:03.682) 0:02:55.302 *********** 2025-07-06 20:34:20.563451 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-07-06 20:34:20.563462 | orchestrator | 2025-07-06 20:34:20.563473 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-07-06 20:34:20.563488 | orchestrator | Sunday 06 July 2025 20:28:59 +0000 (0:00:03.229) 0:02:58.532 *********** 2025-07-06 20:34:20.563505 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-07-06 20:34:20.563523 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-07-06 20:34:20.563542 | orchestrator | 2025-07-06 20:34:20.563555 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-07-06 20:34:20.563566 | orchestrator | Sunday 06 July 2025 20:29:07 +0000 (0:00:07.445) 0:03:05.977 *********** 2025-07-06 20:34:20.563604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-06 20:34:20.563620 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 
'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-06 20:34:20.563638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-06 20:34:20.563674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 
5672'], 'timeout': '30'}}}) 2025-07-06 20:34:20.563688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:34:20.563699 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:34:20.563710 | orchestrator | 2025-07-06 20:34:20.563721 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-07-06 20:34:20.563732 | orchestrator | Sunday 06 July 2025 20:29:08 +0000 (0:00:01.273) 0:03:07.251 *********** 2025-07-06 20:34:20.563744 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:34:20.563763 | orchestrator | 2025-07-06 20:34:20.563782 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-07-06 20:34:20.563796 | orchestrator | Sunday 06 July 2025 20:29:08 +0000 (0:00:00.135) 0:03:07.387 *********** 2025-07-06 20:34:20.563807 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:34:20.563817 | 
orchestrator | skipping: [testbed-node-1] 2025-07-06 20:34:20.563828 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:34:20.563839 | orchestrator | 2025-07-06 20:34:20.563850 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-07-06 20:34:20.563861 | orchestrator | Sunday 06 July 2025 20:29:09 +0000 (0:00:00.526) 0:03:07.913 *********** 2025-07-06 20:34:20.563871 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-06 20:34:20.563882 | orchestrator | 2025-07-06 20:34:20.563892 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-07-06 20:34:20.563903 | orchestrator | Sunday 06 July 2025 20:29:09 +0000 (0:00:00.685) 0:03:08.599 *********** 2025-07-06 20:34:20.563921 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:34:20.563932 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:34:20.563943 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:34:20.563956 | orchestrator | 2025-07-06 20:34:20.563975 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-07-06 20:34:20.563991 | orchestrator | Sunday 06 July 2025 20:29:10 +0000 (0:00:00.315) 0:03:08.914 *********** 2025-07-06 20:34:20.564002 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:34:20.564013 | orchestrator | 2025-07-06 20:34:20.564023 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-07-06 20:34:20.564034 | orchestrator | Sunday 06 July 2025 20:29:10 +0000 (0:00:00.692) 0:03:09.607 *********** 2025-07-06 20:34:20.564051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-06 20:34:20.564074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-06 20:34:20.564088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-06 20:34:20.564108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:34:20.564124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:34:20.564137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:34:20.564157 | orchestrator | 2025-07-06 20:34:20.564175 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-07-06 20:34:20.564187 | orchestrator | Sunday 06 July 2025 20:29:12 +0000 (0:00:02.213) 0:03:11.820 *********** 2025-07-06 20:34:20.564207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-06 20:34:20.564220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:34:20.564238 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:34:20.564249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-06 20:34:20.564267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:34:20.564278 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:34:20.564303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-06 20:34:20.564325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:34:20.564344 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:34:20.564355 | orchestrator | 2025-07-06 20:34:20.564366 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-07-06 20:34:20.564377 | orchestrator | Sunday 06 July 2025 20:29:13 +0000 (0:00:00.563) 0:03:12.384 *********** 2025-07-06 20:34:20.564388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 
'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-06 20:34:20.564550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:34:20.564597 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:34:20.564666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-06 20:34:20.564681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:34:20.564704 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:34:20.564716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-06 20:34:20.564727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:34:20.564739 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:34:20.564750 | orchestrator | 2025-07-06 20:34:20.564761 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-07-06 20:34:20.564772 | orchestrator | Sunday 06 July 2025 20:29:14 +0000 (0:00:00.950) 
0:03:13.334 *********** 2025-07-06 20:34:20.564823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-06 20:34:20.564839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-06 20:34:20.564858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-06 20:34:20.564875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 
'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:34:20.564887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:34:20.564906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:34:20.564918 | orchestrator | 2025-07-06 20:34:20.564929 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-07-06 20:34:20.564946 | orchestrator | Sunday 06 July 2025 20:29:16 +0000 (0:00:02.328) 0:03:15.662 *********** 2025-07-06 20:34:20.564958 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-06 20:34:20.564970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-06 20:34:20.564991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-06 20:34:20.565000 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 
'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:34:20.565037 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:34:20.565046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:34:20.565055 | orchestrator | 2025-07-06 20:34:20.565062 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-07-06 20:34:20.565070 | orchestrator | Sunday 06 July 2025 20:29:22 +0000 (0:00:05.572) 0:03:21.235 *********** 2025-07-06 20:34:20.565083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': 
{'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-06 20:34:20.565092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:34:20.565100 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:34:20.565116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 
'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-06 20:34:20.565131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:34:20.565139 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:34:20.565147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-06 20:34:20.565183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-06 20:34:20.565192 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:34:20.565200 | orchestrator | 2025-07-06 20:34:20.565208 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-07-06 20:34:20.565216 | orchestrator | 
Sunday 06 July 2025 20:29:22 +0000 (0:00:00.586) 0:03:21.821 *********** 2025-07-06 20:34:20.565224 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:34:20.565232 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:34:20.565239 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:34:20.565252 | orchestrator | 2025-07-06 20:34:20.565260 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-07-06 20:34:20.565268 | orchestrator | Sunday 06 July 2025 20:29:24 +0000 (0:00:01.893) 0:03:23.715 *********** 2025-07-06 20:34:20.565276 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:34:20.565289 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:34:20.565297 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:34:20.565305 | orchestrator | 2025-07-06 20:34:20.565312 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-07-06 20:34:20.565320 | orchestrator | Sunday 06 July 2025 20:29:25 +0000 (0:00:00.321) 0:03:24.037 *********** 2025-07-06 20:34:20.565347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-06 20:34:20.565357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-06 20:34:20.565370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-06 20:34:20.565390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:34:20.565399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:34:20.565435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-06 20:34:20.565444 | orchestrator | 2025-07-06 20:34:20.565452 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-07-06 20:34:20.565460 | orchestrator | Sunday 06 July 2025 20:29:27 +0000 (0:00:01.925) 0:03:25.962 *********** 2025-07-06 20:34:20.565468 | orchestrator | 2025-07-06 20:34:20.565476 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-07-06 20:34:20.565484 | orchestrator | Sunday 06 July 2025 20:29:27 +0000 (0:00:00.141) 0:03:26.104 *********** 2025-07-06 20:34:20.565491 | orchestrator | 2025-07-06 20:34:20.565521 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-07-06 20:34:20.565529 | orchestrator | Sunday 06 July 2025 20:29:27 +0000 (0:00:00.146) 0:03:26.251 *********** 2025-07-06 20:34:20.565537 | orchestrator | 2025-07-06 20:34:20.565545 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-07-06 20:34:20.565552 | orchestrator | Sunday 06 July 2025 20:29:27 +0000 (0:00:00.269) 0:03:26.520 *********** 2025-07-06 20:34:20.565560 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:34:20.565568 | orchestrator | changed: [testbed-node-1] 
2025-07-06 20:34:20.565576 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:34:20.565584 | orchestrator | 2025-07-06 20:34:20.565592 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-07-06 20:34:20.565599 | orchestrator | Sunday 06 July 2025 20:29:45 +0000 (0:00:18.248) 0:03:44.768 *********** 2025-07-06 20:34:20.565607 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:34:20.565615 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:34:20.565622 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:34:20.565630 | orchestrator | 2025-07-06 20:34:20.565638 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-07-06 20:34:20.565646 | orchestrator | 2025-07-06 20:34:20.565671 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-07-06 20:34:20.565685 | orchestrator | Sunday 06 July 2025 20:29:51 +0000 (0:00:05.268) 0:03:50.037 *********** 2025-07-06 20:34:20.565694 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:34:20.565702 | orchestrator | 2025-07-06 20:34:20.565710 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-07-06 20:34:20.565718 | orchestrator | Sunday 06 July 2025 20:29:52 +0000 (0:00:01.163) 0:03:51.200 *********** 2025-07-06 20:34:20.565725 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:34:20.565733 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:34:20.565741 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:34:20.565753 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:34:20.565761 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:34:20.565769 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:34:20.565777 | orchestrator | 2025-07-06 20:34:20.565785 
| orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-07-06 20:34:20.565793 | orchestrator | Sunday 06 July 2025 20:29:53 +0000 (0:00:00.758) 0:03:51.958 *********** 2025-07-06 20:34:20.565800 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:34:20.565808 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:34:20.565833 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:34:20.565842 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:34:20.565850 | orchestrator | 2025-07-06 20:34:20.565858 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-07-06 20:34:20.565865 | orchestrator | Sunday 06 July 2025 20:29:54 +0000 (0:00:00.961) 0:03:52.920 *********** 2025-07-06 20:34:20.565874 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-07-06 20:34:20.565882 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-07-06 20:34:20.565890 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-07-06 20:34:20.565898 | orchestrator | 2025-07-06 20:34:20.565912 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-07-06 20:34:20.565920 | orchestrator | Sunday 06 July 2025 20:29:54 +0000 (0:00:00.747) 0:03:53.668 *********** 2025-07-06 20:34:20.565928 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-07-06 20:34:20.565936 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-07-06 20:34:20.565944 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-07-06 20:34:20.565952 | orchestrator | 2025-07-06 20:34:20.565959 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-07-06 20:34:20.565967 | orchestrator | Sunday 06 July 2025 20:29:56 +0000 (0:00:01.326) 0:03:54.994 *********** 2025-07-06 20:34:20.565975 | 
orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-07-06 20:34:20.565983 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:34:20.565991 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-07-06 20:34:20.565999 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:34:20.566006 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-07-06 20:34:20.566064 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:34:20.566076 | orchestrator | 2025-07-06 20:34:20.566084 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-07-06 20:34:20.566092 | orchestrator | Sunday 06 July 2025 20:29:56 +0000 (0:00:00.685) 0:03:55.680 *********** 2025-07-06 20:34:20.566100 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-07-06 20:34:20.566107 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-06 20:34:20.566115 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:34:20.566123 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-07-06 20:34:20.566131 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-06 20:34:20.566147 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:34:20.566155 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-07-06 20:34:20.566163 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-06 20:34:20.566171 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:34:20.566179 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-07-06 20:34:20.566204 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-07-06 20:34:20.566212 | orchestrator | changed: [testbed-node-4] => 
(item=net.bridge.bridge-nf-call-ip6tables) 2025-07-06 20:34:20.566220 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-07-06 20:34:20.566228 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-07-06 20:34:20.566235 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-07-06 20:34:20.566243 | orchestrator | 2025-07-06 20:34:20.566251 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-07-06 20:34:20.566259 | orchestrator | Sunday 06 July 2025 20:29:58 +0000 (0:00:02.017) 0:03:57.698 *********** 2025-07-06 20:34:20.566266 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:34:20.566274 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:34:20.566282 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:34:20.566289 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:34:20.566297 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:34:20.566305 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:34:20.566312 | orchestrator | 2025-07-06 20:34:20.566320 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-07-06 20:34:20.566328 | orchestrator | Sunday 06 July 2025 20:30:00 +0000 (0:00:01.386) 0:03:59.084 *********** 2025-07-06 20:34:20.566335 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:34:20.566343 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:34:20.566351 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:34:20.566358 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:34:20.566366 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:34:20.566373 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:34:20.566381 | orchestrator | 2025-07-06 20:34:20.566389 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-07-06 
20:34:20.566396 | orchestrator | Sunday 06 July 2025 20:30:01 +0000 (0:00:01.578) 0:04:00.663 *********** 2025-07-06 20:34:20.566431 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-06 20:34:20.566458 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-06 20:34:20.566473 | orchestrator | changed: [testbed-node-5] => 
(item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-06 20:34:20.566481 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-06 20:34:20.566490 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-06 20:34:20.566498 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-06 20:34:20.566511 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-06 20:34:20.566525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-06 20:34:20.566539 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-06 20:34:20.566548 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-06 20:34:20.566556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-06 20:34:20.566571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-06 20:34:20.566586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:34:20.566608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:34:20.566634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:34:20.566643 | orchestrator | 2025-07-06 20:34:20.566651 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-07-06 20:34:20.566659 | orchestrator | Sunday 06 July 2025 20:30:04 +0000 (0:00:02.452) 0:04:03.115 *********** 2025-07-06 20:34:20.566667 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:34:20.566676 | orchestrator | 2025-07-06 20:34:20.566684 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-07-06 20:34:20.566692 | orchestrator | Sunday 06 July 2025 20:30:05 +0000 (0:00:01.239) 0:04:04.355 *********** 2025-07-06 20:34:20.566700 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 
'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-06 20:34:20.566708 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-06 20:34:20.566721 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-06 20:34:20.566740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-06 20:34:20.566748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-06 20:34:20.566756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-06 20:34:20.566765 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-06 20:34:20.566773 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-06 20:34:20.566785 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-06 20:34:20.566793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:34:20.566811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:34:20.566820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:34:20.566828 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-06 20:34:20.566836 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-06 20:34:20.566884 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 
'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-06 20:34:20.566897 | orchestrator | 2025-07-06 20:34:20.566905 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-07-06 20:34:20.566913 | orchestrator | Sunday 06 July 2025 20:30:09 +0000 (0:00:03.767) 0:04:08.122 *********** 2025-07-06 20:34:20.566927 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-06 20:34:20.566936 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-06 20:34:20.566944 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-06 20:34:20.566952 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': 
{'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-06 20:34:20.566961 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:34:20.566973 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-06 20:34:20.566991 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-06 20:34:20.567000 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:34:20.567008 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-06 20:34:20.567017 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-06 20:34:20.567025 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-06 20:34:20.567033 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:34:20.567042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-06 20:34:20.567054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:34:20.567067 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:34:20.567080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-06 20:34:20.567088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:34:20.567096 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:34:20.567104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-06 20:34:20.567112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:34:20.567120 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:34:20.567128 | orchestrator | 2025-07-06 20:34:20.567136 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-07-06 20:34:20.567144 | orchestrator | Sunday 06 July 2025 20:30:11 +0000 (0:00:01.828) 0:04:09.951 *********** 2025-07-06 20:34:20.567153 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-06 20:34:20.567170 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  
2025-07-06 20:34:20.567186 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-06 20:34:20.567196 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-06 20:34:20.567205 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-06 20:34:20.567214 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-06 20:34:20.567223 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:34:20.567237 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:34:20.567251 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-06 20:34:20.567260 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-06 20:34:20.567275 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-06 20:34:20.567284 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:34:20.567293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-06 20:34:20.567302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:34:20.567311 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:34:20.567320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-06 20:34:20.567338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:34:20.567347 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:34:20.567361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-06 20:34:20.567377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-06 20:34:20.567386 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:34:20.567395 | orchestrator | 2025-07-06 20:34:20.567440 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-07-06 20:34:20.567455 | 
orchestrator | Sunday 06 July 2025 20:30:13 +0000 (0:00:01.922) 0:04:11.874 *********** 2025-07-06 20:34:20.567470 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:34:20.567479 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:34:20.567488 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:34:20.567496 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-06 20:34:20.567505 | orchestrator | 2025-07-06 20:34:20.567513 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-07-06 20:34:20.567522 | orchestrator | Sunday 06 July 2025 20:30:13 +0000 (0:00:00.828) 0:04:12.703 *********** 2025-07-06 20:34:20.567531 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-07-06 20:34:20.567539 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-07-06 20:34:20.567548 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-07-06 20:34:20.567556 | orchestrator | 2025-07-06 20:34:20.567565 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-07-06 20:34:20.567573 | orchestrator | Sunday 06 July 2025 20:30:14 +0000 (0:00:01.031) 0:04:13.735 *********** 2025-07-06 20:34:20.567582 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-07-06 20:34:20.567590 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-07-06 20:34:20.567599 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-07-06 20:34:20.567607 | orchestrator | 2025-07-06 20:34:20.567616 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-07-06 20:34:20.567624 | orchestrator | Sunday 06 July 2025 20:30:15 +0000 (0:00:00.922) 0:04:14.657 *********** 2025-07-06 20:34:20.567641 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:34:20.567650 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:34:20.567658 | orchestrator | ok: [testbed-node-5] 2025-07-06 
20:34:20.567667 | orchestrator | 2025-07-06 20:34:20.567675 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-07-06 20:34:20.567684 | orchestrator | Sunday 06 July 2025 20:30:16 +0000 (0:00:00.517) 0:04:15.175 *********** 2025-07-06 20:34:20.567692 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:34:20.567701 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:34:20.567709 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:34:20.567718 | orchestrator | 2025-07-06 20:34:20.567726 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-07-06 20:34:20.567735 | orchestrator | Sunday 06 July 2025 20:30:16 +0000 (0:00:00.502) 0:04:15.678 *********** 2025-07-06 20:34:20.567743 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-07-06 20:34:20.567752 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-07-06 20:34:20.567761 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-07-06 20:34:20.567769 | orchestrator | 2025-07-06 20:34:20.567778 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-07-06 20:34:20.567787 | orchestrator | Sunday 06 July 2025 20:30:18 +0000 (0:00:01.367) 0:04:17.045 *********** 2025-07-06 20:34:20.567795 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-07-06 20:34:20.567804 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-07-06 20:34:20.567812 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-07-06 20:34:20.567821 | orchestrator | 2025-07-06 20:34:20.567829 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-07-06 20:34:20.567837 | orchestrator | Sunday 06 July 2025 20:30:19 +0000 (0:00:01.247) 0:04:18.293 *********** 2025-07-06 20:34:20.567846 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-07-06 
20:34:20.567855 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-07-06 20:34:20.567863 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-07-06 20:34:20.567872 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-07-06 20:34:20.567880 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-07-06 20:34:20.567889 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-07-06 20:34:20.567897 | orchestrator | 2025-07-06 20:34:20.567906 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-07-06 20:34:20.567914 | orchestrator | Sunday 06 July 2025 20:30:23 +0000 (0:00:03.695) 0:04:21.988 *********** 2025-07-06 20:34:20.567923 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:34:20.567931 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:34:20.567944 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:34:20.567953 | orchestrator | 2025-07-06 20:34:20.567961 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-07-06 20:34:20.567970 | orchestrator | Sunday 06 July 2025 20:30:23 +0000 (0:00:00.292) 0:04:22.281 *********** 2025-07-06 20:34:20.567978 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:34:20.567987 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:34:20.567995 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:34:20.568004 | orchestrator | 2025-07-06 20:34:20.568012 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-07-06 20:34:20.568020 | orchestrator | Sunday 06 July 2025 20:30:23 +0000 (0:00:00.298) 0:04:22.580 *********** 2025-07-06 20:34:20.568029 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:34:20.568038 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:34:20.568046 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:34:20.568055 | orchestrator | 
2025-07-06 20:34:20.568063 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-07-06 20:34:20.568072 | orchestrator | Sunday 06 July 2025 20:30:25 +0000 (0:00:01.399) 0:04:23.979 *********** 2025-07-06 20:34:20.568086 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-07-06 20:34:20.568102 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-07-06 20:34:20.568111 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-07-06 20:34:20.568120 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-07-06 20:34:20.568128 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-07-06 20:34:20.568137 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-07-06 20:34:20.568146 | orchestrator | 2025-07-06 20:34:20.568154 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-07-06 20:34:20.568163 | orchestrator | Sunday 06 July 2025 20:30:28 +0000 (0:00:03.102) 0:04:27.082 *********** 2025-07-06 20:34:20.568171 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-07-06 20:34:20.568180 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-07-06 20:34:20.568189 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-07-06 20:34:20.568197 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-07-06 20:34:20.568206 | orchestrator | changed: 
[testbed-node-3] 2025-07-06 20:34:20.568214 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-07-06 20:34:20.568223 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:34:20.568231 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-07-06 20:34:20.568240 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:34:20.568248 | orchestrator | 2025-07-06 20:34:20.568257 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-07-06 20:34:20.568265 | orchestrator | Sunday 06 July 2025 20:30:31 +0000 (0:00:03.293) 0:04:30.376 *********** 2025-07-06 20:34:20.568274 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:34:20.568282 | orchestrator | 2025-07-06 20:34:20.568291 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-07-06 20:34:20.568299 | orchestrator | Sunday 06 July 2025 20:30:31 +0000 (0:00:00.131) 0:04:30.507 *********** 2025-07-06 20:34:20.568308 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:34:20.568316 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:34:20.568325 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:34:20.568333 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:34:20.568342 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:34:20.568350 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:34:20.568359 | orchestrator | 2025-07-06 20:34:20.568367 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-07-06 20:34:20.568376 | orchestrator | Sunday 06 July 2025 20:30:32 +0000 (0:00:00.761) 0:04:31.268 *********** 2025-07-06 20:34:20.568385 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-07-06 20:34:20.568393 | orchestrator | 2025-07-06 20:34:20.568447 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-07-06 20:34:20.568459 | orchestrator | Sunday 06 July 2025 
20:30:33 +0000 (0:00:00.654) 0:04:31.923 *********** 2025-07-06 20:34:20.568468 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:34:20.568476 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:34:20.568485 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:34:20.568494 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:34:20.568502 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:34:20.568511 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:34:20.568519 | orchestrator | 2025-07-06 20:34:20.568528 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-07-06 20:34:20.568537 | orchestrator | Sunday 06 July 2025 20:30:33 +0000 (0:00:00.559) 0:04:32.482 *********** 2025-07-06 20:34:20.568557 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-06 20:34:20.568573 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-06 20:34:20.568583 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-06 20:34:20.568592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-06 20:34:20.568602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-06 20:34:20.568611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-06 20:34:20.568630 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 
'timeout': '30'}}}) 2025-07-06 20:34:20.568647 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-06 20:34:20.568657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:34:20.568666 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-06 20:34:20.568675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 
'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:34:20.568684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:34:20.568698 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-06 20:34:20.568712 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 
'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-06 20:34:20.568727 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-06 20:34:20.568736 | orchestrator | 2025-07-06 20:34:20.568744 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-07-06 20:34:20.568753 | orchestrator | Sunday 06 July 2025 20:30:37 +0000 (0:00:03.851) 0:04:36.334 *********** 2025-07-06 20:34:20.568762 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 
'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-07-06 20:34:20.568771 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-07-06 20:34:20.568786 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-07-06 20:34:20.568799 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-07-06 20:34:20.568815 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-07-06 20:34:20.568825 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-07-06 20:34:20.568833 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-07-06 20:34:20.568842 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3',
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-07-06 20:34:20.568862 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-07-06 20:34:20.568953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-07-06 20:34:20.568965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-07-06 20:34:20.568974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-07-06 20:34:20.568983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-06 20:34:20.569000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period':
'5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-06 20:34:20.569009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-06 20:34:20.569018 | orchestrator |
2025-07-06 20:34:20.569027 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] *******************
2025-07-06 20:34:20.569036 | orchestrator | Sunday 06 July 2025 20:30:43 +0000 (0:00:06.229) 0:04:42.563 ***********
2025-07-06 20:34:20.569044 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:34:20.569053 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:34:20.569061 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:34:20.569070 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:34:20.569083 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:34:20.569091 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:34:20.569100 | orchestrator |
2025-07-06 20:34:20.569108 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2025-07-06 20:34:20.569117 | orchestrator | Sunday 06 July 2025 20:30:45 +0000 (0:00:01.583) 0:04:44.146 ***********
2025-07-06 20:34:20.569125 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-07-06 20:34:20.569134 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-07-06 20:34:20.569142 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-07-06 20:34:20.569151 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-07-06 20:34:20.569163 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-07-06 20:34:20.569178 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-07-06 20:34:20.569198 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:34:20.569213 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-07-06 20:34:20.569227 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-07-06 20:34:20.569239 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:34:20.569248 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-07-06 20:34:20.569257 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:34:20.569265 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-07-06 20:34:20.569274 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-07-06 20:34:20.569282 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-07-06 20:34:20.569291 | orchestrator |
2025-07-06 20:34:20.569299 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2025-07-06 20:34:20.569308 | orchestrator | Sunday 06 July 2025 20:30:48 +0000 (0:00:03.585) 0:04:47.732 ***********
2025-07-06 20:34:20.569322 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:34:20.569330 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:34:20.569339 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:34:20.569347 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:34:20.569356 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:34:20.569364 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:34:20.569377 | orchestrator |
2025-07-06 20:34:20.569391 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] *********************
2025-07-06 20:34:20.569433 | orchestrator | Sunday 06 July 2025 20:30:49 +0000 (0:00:00.789) 0:04:48.521 ***********
2025-07-06 20:34:20.569447 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-07-06 20:34:20.569466 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-07-06 20:34:20.569478 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-07-06 20:34:20.569490 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-07-06 20:34:20.569502 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-07-06 20:34:20.569515 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-07-06 20:34:20.569527 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-07-06 20:34:20.569539 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-07-06 20:34:20.569553 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-07-06 20:34:20.569568 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf',
'service': 'nova-libvirt'})
2025-07-06 20:34:20.569582 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:34:20.569597 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-07-06 20:34:20.569609 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:34:20.569618 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-07-06 20:34:20.569626 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:34:20.569635 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-07-06 20:34:20.569643 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-07-06 20:34:20.569652 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-07-06 20:34:20.569660 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-07-06 20:34:20.569675 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-07-06 20:34:20.569684 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-07-06 20:34:20.569693 | orchestrator |
2025-07-06 20:34:20.569702 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] **********************************
2025-07-06 20:34:20.569712 | orchestrator | Sunday 06 July 2025 20:30:54 +0000 (0:00:05.086) 0:04:53.608 ***********
2025-07-06 20:34:20.569721 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-07-06 20:34:20.569731 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-07-06 20:34:20.569740 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-07-06 20:34:20.569759 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-07-06 20:34:20.569777 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-07-06 20:34:20.569787 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-07-06 20:34:20.569796 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-07-06 20:34:20.569806 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-07-06 20:34:20.569815 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-07-06 20:34:20.569825 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-07-06 20:34:20.569834 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-07-06 20:34:20.569844 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:34:20.569853 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-07-06 20:34:20.569863 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-07-06 20:34:20.569872 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-07-06 20:34:20.569881 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:34:20.569891 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-07-06 20:34:20.569900 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:34:20.569910 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-07-06 20:34:20.569920 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-07-06 20:34:20.569929 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-07-06 20:34:20.569939 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-07-06 20:34:20.569948 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-07-06 20:34:20.569957 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-07-06 20:34:20.569967 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-07-06 20:34:20.569976 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-07-06 20:34:20.569986 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-07-06 20:34:20.569995 | orchestrator |
2025-07-06 20:34:20.570005 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2025-07-06 20:34:20.570015 | orchestrator | Sunday 06 July 2025 20:31:01 +0000 (0:00:07.014) 0:05:00.623 ***********
2025-07-06 20:34:20.570055 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:34:20.570064 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:34:20.570074 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:34:20.570084 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:34:20.570093 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:34:20.570102 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:34:20.570112 | orchestrator |
2025-07-06 20:34:20.570121 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2025-07-06 20:34:20.570131 | orchestrator | Sunday 06 July 2025 20:31:02 +0000 (0:00:00.564) 0:05:01.188 ***********
2025-07-06 20:34:20.570141 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:34:20.570150 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:34:20.570159 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:34:20.570169 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:34:20.570178 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:34:20.570188 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:34:20.570205 | orchestrator |
2025-07-06 20:34:20.570214 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ******************
2025-07-06 20:34:20.570224 | orchestrator | Sunday 06 July 2025 20:31:03 +0000 (0:00:00.762) 0:05:01.950 ***********
2025-07-06 20:34:20.570233 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:34:20.570243 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:34:20.570252 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:34:20.570262 | orchestrator | changed: [testbed-node-3]
2025-07-06 20:34:20.570271 | orchestrator | changed: [testbed-node-4]
2025-07-06 20:34:20.570280 | orchestrator | changed: [testbed-node-5]
2025-07-06 20:34:20.570290 | orchestrator |
2025-07-06 20:34:20.570300 | orchestrator | TASK [nova-cell : Copying over existing policy file] ***************************
2025-07-06 20:34:20.570309 | orchestrator | Sunday 06 July 2025 20:31:05 +0000 (0:00:01.922) 0:05:03.873 ***********
2025-07-06 20:34:20.570331 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/',
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-07-06 20:34:20.570343 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-07-06 20:34:20.570353 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-07-06 20:34:20.570363 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:34:20.570373 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-07-06 20:34:20.570391 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-07-06 20:34:20.570463 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-07-06 20:34:20.570477 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:34:20.570494 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-07-06 20:34:20.570505 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-07-06 20:34:20.570515 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-07-06 20:34:20.570525 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:34:20.570535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-07-06 20:34:20.570552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-06 20:34:20.570562 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:34:20.570576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-07-06 20:34:20.570593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-06 20:34:20.570603 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:34:20.570613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'],
'timeout': '30'}}})
2025-07-06 20:34:20.570623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-06 20:34:20.570633 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:34:20.570642 | orchestrator |
2025-07-06 20:34:20.570652 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2025-07-06 20:34:20.570662 | orchestrator | Sunday 06 July 2025 20:31:06 +0000 (0:00:01.527) 0:05:05.400 ***********
2025-07-06 20:34:20.570672 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2025-07-06 20:34:20.570688 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2025-07-06 20:34:20.570698 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:34:20.570708 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2025-07-06 20:34:20.570717 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-07-06 20:34:20.570727 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:34:20.570737 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-07-06 20:34:20.570746 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-07-06 20:34:20.570756 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:34:20.570765 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-07-06 20:34:20.570775 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2025-07-06 20:34:20.570784 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:34:20.570794 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-07-06 20:34:20.570803 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-07-06 20:34:20.570813 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:34:20.570822 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-07-06 20:34:20.570832 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-07-06 20:34:20.570841 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:34:20.570851 | orchestrator |
2025-07-06 20:34:20.570861 | orchestrator | TASK [nova-cell : Check nova-cell containers] **********************************
2025-07-06 20:34:20.570870 | orchestrator | Sunday 06 July 2025 20:31:07 +0000 (0:00:00.626) 0:05:06.027 ***********
2025-07-06 20:34:20.570884 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-07-06 20:34:20.570902 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image':
'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-06 20:34:20.570913 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-06 20:34:20.570929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-06 20:34:20.570940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-06 20:34:20.570950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-06 20:34:20.570965 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-06 20:34:20.570979 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-06 20:34:20.570988 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-06 20:34:20.570996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:34:20.571016 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:34:20.571025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-06 20:34:20.571033 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': 
'30'}}}) 2025-07-06 20:34:20.571046 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-06 20:34:20.571060 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-06 20:34:20.571074 | orchestrator | 2025-07-06 20:34:20.571082 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-07-06 20:34:20.571090 | orchestrator | Sunday 06 July 2025 20:31:10 +0000 (0:00:03.126) 0:05:09.154 *********** 2025-07-06 20:34:20.571098 | 
orchestrator | skipping: [testbed-node-3]
2025-07-06 20:34:20.571106 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:34:20.571114 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:34:20.571121 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:34:20.571129 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:34:20.571137 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:34:20.571145 | orchestrator |
2025-07-06 20:34:20.571152 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-07-06 20:34:20.571160 | orchestrator | Sunday 06 July 2025 20:31:10 +0000 (0:00:00.593) 0:05:09.748 ***********
2025-07-06 20:34:20.571168 | orchestrator |
2025-07-06 20:34:20.571176 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-07-06 20:34:20.571184 | orchestrator | Sunday 06 July 2025 20:31:11 +0000 (0:00:00.298) 0:05:10.046 ***********
2025-07-06 20:34:20.571192 | orchestrator |
2025-07-06 20:34:20.571200 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-07-06 20:34:20.571207 | orchestrator | Sunday 06 July 2025 20:31:11 +0000 (0:00:00.134) 0:05:10.181 ***********
2025-07-06 20:34:20.571215 | orchestrator |
2025-07-06 20:34:20.571223 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-07-06 20:34:20.571231 | orchestrator | Sunday 06 July 2025 20:31:11 +0000 (0:00:00.136) 0:05:10.318 ***********
2025-07-06 20:34:20.571239 | orchestrator |
2025-07-06 20:34:20.571247 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-07-06 20:34:20.571255 | orchestrator | Sunday 06 July 2025 20:31:11 +0000 (0:00:00.131) 0:05:10.449 ***********
2025-07-06 20:34:20.571262 | orchestrator |
2025-07-06 20:34:20.571270 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-07-06 20:34:20.571278 | orchestrator | Sunday 06 July 2025 20:31:11 +0000 (0:00:00.124) 0:05:10.574 ***********
2025-07-06 20:34:20.571286 | orchestrator |
2025-07-06 20:34:20.571294 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2025-07-06 20:34:20.571301 | orchestrator | Sunday 06 July 2025 20:31:11 +0000 (0:00:00.123) 0:05:10.697 ***********
2025-07-06 20:34:20.571309 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:34:20.571317 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:34:20.571325 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:34:20.571333 | orchestrator |
2025-07-06 20:34:20.571340 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2025-07-06 20:34:20.571348 | orchestrator | Sunday 06 July 2025 20:31:21 +0000 (0:00:09.927) 0:05:20.624 ***********
2025-07-06 20:34:20.571356 | orchestrator | changed: [testbed-node-0]
2025-07-06 20:34:20.571364 | orchestrator | changed: [testbed-node-1]
2025-07-06 20:34:20.571372 | orchestrator | changed: [testbed-node-2]
2025-07-06 20:34:20.571380 | orchestrator |
2025-07-06 20:34:20.571387 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2025-07-06 20:34:20.571395 | orchestrator | Sunday 06 July 2025 20:31:34 +0000 (0:00:12.457) 0:05:33.082 ***********
2025-07-06 20:34:20.571421 | orchestrator | changed: [testbed-node-5]
2025-07-06 20:34:20.571429 | orchestrator | changed: [testbed-node-3]
2025-07-06 20:34:20.571437 | orchestrator | changed: [testbed-node-4]
2025-07-06 20:34:20.571445 | orchestrator |
2025-07-06 20:34:20.571453 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2025-07-06 20:34:20.571461 | orchestrator | Sunday 06 July 2025 20:32:02 +0000 (0:00:27.875) 0:06:00.957 ***********
2025-07-06 20:34:20.571468 | orchestrator | changed: [testbed-node-4]
2025-07-06 20:34:20.571476 | orchestrator | changed: [testbed-node-3]
2025-07-06 20:34:20.571484 | orchestrator | changed: [testbed-node-5]
2025-07-06 20:34:20.571497 | orchestrator |
2025-07-06 20:34:20.571509 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2025-07-06 20:34:20.571517 | orchestrator | Sunday 06 July 2025 20:32:47 +0000 (0:00:44.983) 0:06:45.940 ***********
2025-07-06 20:34:20.571525 | orchestrator | changed: [testbed-node-4]
2025-07-06 20:34:20.571532 | orchestrator | changed: [testbed-node-3]
2025-07-06 20:34:20.571540 | orchestrator | changed: [testbed-node-5]
2025-07-06 20:34:20.571548 | orchestrator |
2025-07-06 20:34:20.571556 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2025-07-06 20:34:20.571564 | orchestrator | Sunday 06 July 2025 20:32:48 +0000 (0:00:00.995) 0:06:46.936 ***********
2025-07-06 20:34:20.571571 | orchestrator | changed: [testbed-node-3]
2025-07-06 20:34:20.571579 | orchestrator | changed: [testbed-node-4]
2025-07-06 20:34:20.571587 | orchestrator | changed: [testbed-node-5]
2025-07-06 20:34:20.571594 | orchestrator |
2025-07-06 20:34:20.571602 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2025-07-06 20:34:20.571610 | orchestrator | Sunday 06 July 2025 20:32:48 +0000 (0:00:00.826) 0:06:47.763 ***********
2025-07-06 20:34:20.571618 | orchestrator | changed: [testbed-node-3]
2025-07-06 20:34:20.571626 | orchestrator | changed: [testbed-node-5]
2025-07-06 20:34:20.571633 | orchestrator | changed: [testbed-node-4]
2025-07-06 20:34:20.571641 | orchestrator |
2025-07-06 20:34:20.571653 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2025-07-06 20:34:20.571662 | orchestrator | Sunday 06 July 2025 20:33:14 +0000 (0:00:25.664) 0:07:13.427 ***********
2025-07-06 20:34:20.571669 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:34:20.571677 | 
orchestrator |
2025-07-06 20:34:20.571685 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2025-07-06 20:34:20.571693 | orchestrator | Sunday 06 July 2025 20:33:14 +0000 (0:00:00.134) 0:07:13.562 ***********
2025-07-06 20:34:20.571701 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:34:20.571708 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:34:20.571716 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:34:20.571724 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:34:20.571731 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:34:20.571739 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2025-07-06 20:34:20.571747 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2025-07-06 20:34:20.571755 | orchestrator |
2025-07-06 20:34:20.571763 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2025-07-06 20:34:20.571770 | orchestrator | Sunday 06 July 2025 20:33:36 +0000 (0:00:21.771) 0:07:35.333 ***********
2025-07-06 20:34:20.571778 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:34:20.571786 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:34:20.571794 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:34:20.571801 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:34:20.571809 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:34:20.571817 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:34:20.571824 | orchestrator |
2025-07-06 20:34:20.571832 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2025-07-06 20:34:20.571840 | orchestrator | Sunday 06 July 2025 20:33:44 +0000 (0:00:08.004) 0:07:43.338 ***********
2025-07-06 20:34:20.571848 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:34:20.571855 | orchestrator | skipping: [testbed-node-3]
2025-07-06 20:34:20.571863 | orchestrator | skipping: [testbed-node-5]
2025-07-06 20:34:20.571871 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:34:20.571878 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:34:20.571886 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4
2025-07-06 20:34:20.571894 | orchestrator |
2025-07-06 20:34:20.571902 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-07-06 20:34:20.571910 | orchestrator | Sunday 06 July 2025 20:33:48 +0000 (0:00:03.955) 0:07:47.293 ***********
2025-07-06 20:34:20.571924 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2025-07-06 20:34:20.571932 | orchestrator |
2025-07-06 20:34:20.571940 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-07-06 20:34:20.571948 | orchestrator | Sunday 06 July 2025 20:34:00 +0000 (0:00:11.760) 0:07:59.054 ***********
2025-07-06 20:34:20.571955 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2025-07-06 20:34:20.571963 | orchestrator |
2025-07-06 20:34:20.571971 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2025-07-06 20:34:20.571978 | orchestrator | Sunday 06 July 2025 20:34:01 +0000 (0:00:01.321) 0:08:00.375 ***********
2025-07-06 20:34:20.571986 | orchestrator | skipping: [testbed-node-4]
2025-07-06 20:34:20.571994 | orchestrator |
2025-07-06 20:34:20.572002 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2025-07-06 20:34:20.572009 | orchestrator | Sunday 06 July 2025 20:34:02 +0000 (0:00:01.302) 0:08:01.677 ***********
2025-07-06 20:34:20.572017 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2025-07-06 20:34:20.572025 | orchestrator |
2025-07-06 20:34:20.572032 | orchestrator | TASK [nova-cell : Remove old 
nova_libvirt_secrets container volume] ************ 2025-07-06 20:34:20.572040 | orchestrator | Sunday 06 July 2025 20:34:13 +0000 (0:00:10.219) 0:08:11.896 *********** 2025-07-06 20:34:20.572048 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:34:20.572056 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:34:20.572064 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:34:20.572071 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:34:20.572079 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:34:20.572087 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:34:20.572095 | orchestrator | 2025-07-06 20:34:20.572102 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-07-06 20:34:20.572110 | orchestrator | 2025-07-06 20:34:20.572118 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-07-06 20:34:20.572126 | orchestrator | Sunday 06 July 2025 20:34:14 +0000 (0:00:01.595) 0:08:13.492 *********** 2025-07-06 20:34:20.572133 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:34:20.572141 | orchestrator | changed: [testbed-node-1] 2025-07-06 20:34:20.572149 | orchestrator | changed: [testbed-node-2] 2025-07-06 20:34:20.572157 | orchestrator | 2025-07-06 20:34:20.572164 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-07-06 20:34:20.572172 | orchestrator | 2025-07-06 20:34:20.572184 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-07-06 20:34:20.572192 | orchestrator | Sunday 06 July 2025 20:34:15 +0000 (0:00:01.067) 0:08:14.560 *********** 2025-07-06 20:34:20.572200 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:34:20.572208 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:34:20.572216 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:34:20.572223 | orchestrator | 2025-07-06 20:34:20.572231 | orchestrator | PLAY [Reload Nova 
cell services] *********************************************** 2025-07-06 20:34:20.572239 | orchestrator | 2025-07-06 20:34:20.572247 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-07-06 20:34:20.572254 | orchestrator | Sunday 06 July 2025 20:34:16 +0000 (0:00:00.479) 0:08:15.040 *********** 2025-07-06 20:34:20.572262 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-07-06 20:34:20.572270 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-07-06 20:34:20.572277 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-07-06 20:34:20.572285 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-07-06 20:34:20.572293 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-07-06 20:34:20.572305 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-07-06 20:34:20.572313 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:34:20.572321 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-07-06 20:34:20.572328 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-07-06 20:34:20.572341 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-07-06 20:34:20.572349 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-07-06 20:34:20.572357 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-07-06 20:34:20.572364 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-07-06 20:34:20.572372 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:34:20.572380 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-07-06 20:34:20.572388 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-07-06 20:34:20.572395 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-07-06 20:34:20.572422 
| orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-07-06 20:34:20.572437 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-07-06 20:34:20.572450 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-07-06 20:34:20.572463 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:34:20.572474 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-07-06 20:34:20.572483 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-07-06 20:34:20.572491 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-07-06 20:34:20.572498 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-07-06 20:34:20.572506 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-07-06 20:34:20.572514 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-07-06 20:34:20.572521 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:34:20.572529 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-07-06 20:34:20.572537 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-07-06 20:34:20.572544 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-07-06 20:34:20.572552 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-07-06 20:34:20.572560 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-07-06 20:34:20.572567 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-07-06 20:34:20.572575 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:34:20.572582 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-07-06 20:34:20.572590 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-07-06 20:34:20.572598 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-07-06 20:34:20.572605 | 
orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-07-06 20:34:20.572613 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-07-06 20:34:20.572621 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-07-06 20:34:20.572628 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:34:20.572636 | orchestrator | 2025-07-06 20:34:20.572644 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-07-06 20:34:20.572651 | orchestrator | 2025-07-06 20:34:20.572659 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-07-06 20:34:20.572667 | orchestrator | Sunday 06 July 2025 20:34:17 +0000 (0:00:01.262) 0:08:16.303 *********** 2025-07-06 20:34:20.572674 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-07-06 20:34:20.572682 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-07-06 20:34:20.572690 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:34:20.572698 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-07-06 20:34:20.572705 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-07-06 20:34:20.572713 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:34:20.572721 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-07-06 20:34:20.572728 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2025-07-06 20:34:20.572741 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:34:20.572749 | orchestrator | 2025-07-06 20:34:20.572757 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-07-06 20:34:20.572764 | orchestrator | 2025-07-06 20:34:20.572772 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-07-06 20:34:20.572780 | orchestrator | Sunday 06 July 2025 20:34:18 +0000 
(0:00:00.660) 0:08:16.963 ***********
2025-07-06 20:34:20.572787 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:34:20.572795 | orchestrator |
2025-07-06 20:34:20.572806 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2025-07-06 20:34:20.572814 | orchestrator |
2025-07-06 20:34:20.572822 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2025-07-06 20:34:20.572830 | orchestrator | Sunday 06 July 2025 20:34:18 +0000 (0:00:00.646) 0:08:17.609 ***********
2025-07-06 20:34:20.572837 | orchestrator | skipping: [testbed-node-0]
2025-07-06 20:34:20.572845 | orchestrator | skipping: [testbed-node-1]
2025-07-06 20:34:20.572853 | orchestrator | skipping: [testbed-node-2]
2025-07-06 20:34:20.572860 | orchestrator |
2025-07-06 20:34:20.572868 | orchestrator | PLAY RECAP *********************************************************************
2025-07-06 20:34:20.572876 | orchestrator | testbed-manager : ok=3   changed=3   unreachable=0 failed=0 skipped=0   rescued=0 ignored=0
2025-07-06 20:34:20.572884 | orchestrator | testbed-node-0  : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2025-07-06 20:34:20.572897 | orchestrator | testbed-node-1  : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-07-06 20:34:20.572906 | orchestrator | testbed-node-2  : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-07-06 20:34:20.572913 | orchestrator | testbed-node-3  : ok=38  changed=28  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-07-06 20:34:20.572921 | orchestrator | testbed-node-4  : ok=42  changed=28  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2025-07-06 20:34:20.572929 | orchestrator | testbed-node-5  : ok=37  changed=28  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2025-07-06 20:34:20.572936 | orchestrator |
2025-07-06 20:34:20.572944 | orchestrator |
2025-07-06 20:34:20.572952 | orchestrator | TASKS RECAP ********************************************************************
2025-07-06 20:34:20.572960 | orchestrator | Sunday 06 July 2025 20:34:19 +0000 (0:00:00.430) 0:08:18.040 ***********
2025-07-06 20:34:20.572968 | orchestrator | ===============================================================================
2025-07-06 20:34:20.572975 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 44.98s
2025-07-06 20:34:20.572983 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 30.03s
2025-07-06 20:34:20.572991 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 27.88s
2025-07-06 20:34:20.572998 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 25.66s
2025-07-06 20:34:20.573006 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 21.77s
2025-07-06 20:34:20.573014 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 20.84s
2025-07-06 20:34:20.573021 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 18.25s
2025-07-06 20:34:20.573029 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 17.34s
2025-07-06 20:34:20.573037 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 13.84s
2025-07-06 20:34:20.573044 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 12.46s
2025-07-06 20:34:20.573057 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.76s
2025-07-06 20:34:20.573065 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.71s
2025-07-06 20:34:20.573078 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.58s
2025-07-06 20:34:20.573087 | orchestrator | nova-cell : Create cell ------------------------------------------------ 11.41s
2025-07-06 20:34:20.573095 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 10.22s
2025-07-06 20:34:20.573103 | orchestrator | nova-cell : Restart nova-conductor container ---------------------------- 9.93s
2025-07-06 20:34:20.573110 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.83s
2025-07-06 20:34:20.573118 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.00s
2025-07-06 20:34:20.573126 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.45s
2025-07-06 20:34:20.573134 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 7.01s
2025-07-06 20:34:20.573141 | orchestrator | 2025-07-06 20:34:20 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-07-06 20:34:23.598883 | orchestrator | 2025-07-06 20:34:23 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-07-06 20:34:26.645299 | orchestrator | 2025-07-06 20:34:26 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-07-06 20:34:29.684781 | orchestrator | 2025-07-06 20:34:29 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-07-06 20:34:32.727342 | orchestrator | 2025-07-06 20:34:32 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-07-06 20:34:35.766165 | orchestrator | 2025-07-06 20:34:35 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-07-06 20:34:38.804959 | orchestrator | 2025-07-06 20:34:38 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-07-06 20:34:41.842930 | orchestrator | 2025-07-06 20:34:41 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-07-06 20:34:44.880772 | orchestrator | 2025-07-06 20:34:44 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-07-06 20:34:47.928891 | orchestrator | 2025-07-06 
20:34:47 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-06 20:34:50.968551 | orchestrator | 2025-07-06 20:34:50 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-06 20:34:54.010225 | orchestrator | 2025-07-06 20:34:54 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-06 20:34:57.051511 | orchestrator | 2025-07-06 20:34:57 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-06 20:35:00.091639 | orchestrator | 2025-07-06 20:35:00 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-06 20:35:03.132139 | orchestrator | 2025-07-06 20:35:03 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-06 20:35:06.171011 | orchestrator | 2025-07-06 20:35:06 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-06 20:35:09.212888 | orchestrator | 2025-07-06 20:35:09 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-06 20:35:12.250829 | orchestrator | 2025-07-06 20:35:12 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-06 20:35:15.287368 | orchestrator | 2025-07-06 20:35:15 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-06 20:35:18.331583 | orchestrator | 2025-07-06 20:35:18 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-07-06 20:35:21.371164 | orchestrator | 2025-07-06 20:35:21.641025 | orchestrator | 2025-07-06 20:35:21.646197 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Sun Jul 6 20:35:21 UTC 2025 2025-07-06 20:35:21.646291 | orchestrator | 2025-07-06 20:35:21.992478 | orchestrator | ok: Runtime: 0:35:47.058009 2025-07-06 20:35:22.232544 | 2025-07-06 20:35:22.232873 | TASK [Bootstrap services] 2025-07-06 20:35:22.934891 | orchestrator | 2025-07-06 20:35:22.935205 | orchestrator | # BOOTSTRAP 2025-07-06 20:35:22.935232 | orchestrator | 2025-07-06 20:35:22.935246 | orchestrator | + set -e 2025-07-06 20:35:22.935259 | orchestrator | + echo 2025-07-06 20:35:22.935273 | orchestrator | + echo '# BOOTSTRAP' 
2025-07-06 20:35:22.935290 | orchestrator | + echo 2025-07-06 20:35:22.935336 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-07-06 20:35:22.947828 | orchestrator | + set -e 2025-07-06 20:35:22.947895 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-07-06 20:35:25.019802 | orchestrator | 2025-07-06 20:35:25 | INFO  | It takes a moment until task 01e32289-5abb-46cb-8bbc-7b46a6971bdd (flavor-manager) has been started and output is visible here. 2025-07-06 20:35:33.639558 | orchestrator | 2025-07-06 20:35:29 | INFO  | Flavor SCS-1V-4 created 2025-07-06 20:35:33.639695 | orchestrator | 2025-07-06 20:35:29 | INFO  | Flavor SCS-2V-8 created 2025-07-06 20:35:33.639714 | orchestrator | 2025-07-06 20:35:30 | INFO  | Flavor SCS-4V-16 created 2025-07-06 20:35:33.639727 | orchestrator | 2025-07-06 20:35:30 | INFO  | Flavor SCS-8V-32 created 2025-07-06 20:35:33.639739 | orchestrator | 2025-07-06 20:35:30 | INFO  | Flavor SCS-1V-2 created 2025-07-06 20:35:33.639751 | orchestrator | 2025-07-06 20:35:30 | INFO  | Flavor SCS-2V-4 created 2025-07-06 20:35:33.639762 | orchestrator | 2025-07-06 20:35:30 | INFO  | Flavor SCS-4V-8 created 2025-07-06 20:35:33.639775 | orchestrator | 2025-07-06 20:35:30 | INFO  | Flavor SCS-8V-16 created 2025-07-06 20:35:33.639798 | orchestrator | 2025-07-06 20:35:30 | INFO  | Flavor SCS-16V-32 created 2025-07-06 20:35:33.639810 | orchestrator | 2025-07-06 20:35:31 | INFO  | Flavor SCS-1V-8 created 2025-07-06 20:35:33.639822 | orchestrator | 2025-07-06 20:35:31 | INFO  | Flavor SCS-2V-16 created 2025-07-06 20:35:33.639833 | orchestrator | 2025-07-06 20:35:31 | INFO  | Flavor SCS-4V-32 created 2025-07-06 20:35:33.639844 | orchestrator | 2025-07-06 20:35:31 | INFO  | Flavor SCS-1L-1 created 2025-07-06 20:35:33.639855 | orchestrator | 2025-07-06 20:35:31 | INFO  | Flavor SCS-2V-4-20s created 2025-07-06 20:35:33.639866 | orchestrator | 2025-07-06 20:35:31 | INFO  | Flavor SCS-4V-16-100s created 
2025-07-06 20:35:33.639877 | orchestrator | 2025-07-06 20:35:31 | INFO  | Flavor SCS-1V-4-10 created 2025-07-06 20:35:33.639889 | orchestrator | 2025-07-06 20:35:31 | INFO  | Flavor SCS-2V-8-20 created 2025-07-06 20:35:33.639900 | orchestrator | 2025-07-06 20:35:32 | INFO  | Flavor SCS-4V-16-50 created 2025-07-06 20:35:33.639911 | orchestrator | 2025-07-06 20:35:32 | INFO  | Flavor SCS-8V-32-100 created 2025-07-06 20:35:33.639922 | orchestrator | 2025-07-06 20:35:32 | INFO  | Flavor SCS-1V-2-5 created 2025-07-06 20:35:33.639933 | orchestrator | 2025-07-06 20:35:32 | INFO  | Flavor SCS-2V-4-10 created 2025-07-06 20:35:33.639944 | orchestrator | 2025-07-06 20:35:32 | INFO  | Flavor SCS-4V-8-20 created 2025-07-06 20:35:33.639956 | orchestrator | 2025-07-06 20:35:32 | INFO  | Flavor SCS-8V-16-50 created 2025-07-06 20:35:33.639967 | orchestrator | 2025-07-06 20:35:32 | INFO  | Flavor SCS-16V-32-100 created 2025-07-06 20:35:33.639978 | orchestrator | 2025-07-06 20:35:33 | INFO  | Flavor SCS-1V-8-20 created 2025-07-06 20:35:33.639989 | orchestrator | 2025-07-06 20:35:33 | INFO  | Flavor SCS-2V-16-50 created 2025-07-06 20:35:33.640000 | orchestrator | 2025-07-06 20:35:33 | INFO  | Flavor SCS-4V-32-100 created 2025-07-06 20:35:33.640012 | orchestrator | 2025-07-06 20:35:33 | INFO  | Flavor SCS-1L-1-5 created 2025-07-06 20:35:35.744770 | orchestrator | 2025-07-06 20:35:35 | INFO  | Trying to run play bootstrap-basic in environment openstack 2025-07-06 20:35:45.946982 | orchestrator | 2025-07-06 20:35:45 | INFO  | Task 453f08aa-8f0d-4267-83df-fb4b2bcdf6e9 (bootstrap-basic) was prepared for execution. 2025-07-06 20:35:45.947118 | orchestrator | 2025-07-06 20:35:45 | INFO  | It takes a moment until task 453f08aa-8f0d-4267-83df-fb4b2bcdf6e9 (bootstrap-basic) has been started and output is visible here. 
2025-07-06 20:36:48.602368 | orchestrator | 2025-07-06 20:36:48.602491 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2025-07-06 20:36:48.602508 | orchestrator | 2025-07-06 20:36:48.602574 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-06 20:36:48.602586 | orchestrator | Sunday 06 July 2025 20:35:49 +0000 (0:00:00.072) 0:00:00.072 *********** 2025-07-06 20:36:48.602598 | orchestrator | ok: [localhost] 2025-07-06 20:36:48.602610 | orchestrator | 2025-07-06 20:36:48.602621 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2025-07-06 20:36:48.602634 | orchestrator | Sunday 06 July 2025 20:35:51 +0000 (0:00:01.847) 0:00:01.920 *********** 2025-07-06 20:36:48.602645 | orchestrator | ok: [localhost] 2025-07-06 20:36:48.602656 | orchestrator | 2025-07-06 20:36:48.602667 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2025-07-06 20:36:48.602678 | orchestrator | Sunday 06 July 2025 20:36:01 +0000 (0:00:09.674) 0:00:11.595 *********** 2025-07-06 20:36:48.602689 | orchestrator | changed: [localhost] 2025-07-06 20:36:48.602700 | orchestrator | 2025-07-06 20:36:48.602711 | orchestrator | TASK [Get volume type local] *************************************************** 2025-07-06 20:36:48.602722 | orchestrator | Sunday 06 July 2025 20:36:08 +0000 (0:00:07.528) 0:00:19.123 *********** 2025-07-06 20:36:48.602733 | orchestrator | ok: [localhost] 2025-07-06 20:36:48.602745 | orchestrator | 2025-07-06 20:36:48.602756 | orchestrator | TASK [Create volume type local] ************************************************ 2025-07-06 20:36:48.602767 | orchestrator | Sunday 06 July 2025 20:36:15 +0000 (0:00:07.013) 0:00:26.137 *********** 2025-07-06 20:36:48.602777 | orchestrator | changed: [localhost] 2025-07-06 20:36:48.602793 | orchestrator | 2025-07-06 20:36:48.602804 | orchestrator | 
TASK [Create public network] *************************************************** 2025-07-06 20:36:48.602814 | orchestrator | Sunday 06 July 2025 20:36:22 +0000 (0:00:06.572) 0:00:32.709 *********** 2025-07-06 20:36:48.602825 | orchestrator | changed: [localhost] 2025-07-06 20:36:48.602836 | orchestrator | 2025-07-06 20:36:48.602846 | orchestrator | TASK [Set public network to default] ******************************************* 2025-07-06 20:36:48.602857 | orchestrator | Sunday 06 July 2025 20:36:29 +0000 (0:00:06.988) 0:00:39.698 *********** 2025-07-06 20:36:48.602867 | orchestrator | changed: [localhost] 2025-07-06 20:36:48.602880 | orchestrator | 2025-07-06 20:36:48.602903 | orchestrator | TASK [Create public subnet] **************************************************** 2025-07-06 20:36:48.602916 | orchestrator | Sunday 06 July 2025 20:36:36 +0000 (0:00:07.168) 0:00:46.867 *********** 2025-07-06 20:36:48.602929 | orchestrator | changed: [localhost] 2025-07-06 20:36:48.602941 | orchestrator | 2025-07-06 20:36:48.602954 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2025-07-06 20:36:48.602967 | orchestrator | Sunday 06 July 2025 20:36:41 +0000 (0:00:04.282) 0:00:51.149 *********** 2025-07-06 20:36:48.602980 | orchestrator | changed: [localhost] 2025-07-06 20:36:48.602993 | orchestrator | 2025-07-06 20:36:48.603005 | orchestrator | TASK [Create manager role] ***************************************************** 2025-07-06 20:36:48.603018 | orchestrator | Sunday 06 July 2025 20:36:44 +0000 (0:00:03.873) 0:00:55.023 *********** 2025-07-06 20:36:48.603030 | orchestrator | ok: [localhost] 2025-07-06 20:36:48.603042 | orchestrator | 2025-07-06 20:36:48.603055 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:36:48.603068 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-06 20:36:48.603081 | orchestrator 
| 2025-07-06 20:36:48.603093 | orchestrator | 2025-07-06 20:36:48.603105 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:36:48.603119 | orchestrator | Sunday 06 July 2025 20:36:48 +0000 (0:00:03.493) 0:00:58.517 *********** 2025-07-06 20:36:48.603155 | orchestrator | =============================================================================== 2025-07-06 20:36:48.603167 | orchestrator | Get volume type LUKS ---------------------------------------------------- 9.67s 2025-07-06 20:36:48.603178 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.53s 2025-07-06 20:36:48.603189 | orchestrator | Set public network to default ------------------------------------------- 7.17s 2025-07-06 20:36:48.603199 | orchestrator | Get volume type local --------------------------------------------------- 7.01s 2025-07-06 20:36:48.603210 | orchestrator | Create public network --------------------------------------------------- 6.99s 2025-07-06 20:36:48.603221 | orchestrator | Create volume type local ------------------------------------------------ 6.57s 2025-07-06 20:36:48.603231 | orchestrator | Create public subnet ---------------------------------------------------- 4.28s 2025-07-06 20:36:48.603242 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.87s 2025-07-06 20:36:48.603253 | orchestrator | Create manager role ----------------------------------------------------- 3.49s 2025-07-06 20:36:48.603263 | orchestrator | Gathering Facts --------------------------------------------------------- 1.85s 2025-07-06 20:36:50.753899 | orchestrator | 2025-07-06 20:36:50 | INFO  | It takes a moment until task d9baebeb-f38e-4d6d-bea6-2d47fdbf322f (image-manager) has been started and output is visible here. 
2025-07-06 20:37:29.720509 | orchestrator | 2025-07-06 20:36:54 | INFO  | Processing image 'Cirros 0.6.2' 2025-07-06 20:37:29.720654 | orchestrator | 2025-07-06 20:36:54 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2025-07-06 20:37:29.720674 | orchestrator | 2025-07-06 20:36:54 | INFO  | Importing image Cirros 0.6.2 2025-07-06 20:37:29.720687 | orchestrator | 2025-07-06 20:36:54 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-07-06 20:37:29.720699 | orchestrator | 2025-07-06 20:36:56 | INFO  | Waiting for image to leave queued state... 2025-07-06 20:37:29.720711 | orchestrator | 2025-07-06 20:36:58 | INFO  | Waiting for import to complete... 2025-07-06 20:37:29.720722 | orchestrator | 2025-07-06 20:37:08 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2025-07-06 20:37:29.720733 | orchestrator | 2025-07-06 20:37:08 | INFO  | Checking parameters of 'Cirros 0.6.2' 2025-07-06 20:37:29.720744 | orchestrator | 2025-07-06 20:37:08 | INFO  | Setting internal_version = 0.6.2 2025-07-06 20:37:29.720755 | orchestrator | 2025-07-06 20:37:08 | INFO  | Setting image_original_user = cirros 2025-07-06 20:37:29.720766 | orchestrator | 2025-07-06 20:37:08 | INFO  | Adding tag os:cirros 2025-07-06 20:37:29.720777 | orchestrator | 2025-07-06 20:37:08 | INFO  | Setting property architecture: x86_64 2025-07-06 20:37:29.720788 | orchestrator | 2025-07-06 20:37:08 | INFO  | Setting property hw_disk_bus: scsi 2025-07-06 20:37:29.720798 | orchestrator | 2025-07-06 20:37:09 | INFO  | Setting property hw_rng_model: virtio 2025-07-06 20:37:29.720809 | orchestrator | 2025-07-06 20:37:09 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-07-06 20:37:29.720820 | orchestrator | 2025-07-06 20:37:09 | INFO  | Setting property hw_watchdog_action: reset 2025-07-06 20:37:29.720831 | orchestrator | 2025-07-06 20:37:09 | 
INFO  | Setting property hypervisor_type: qemu 2025-07-06 20:37:29.720841 | orchestrator | 2025-07-06 20:37:10 | INFO  | Setting property os_distro: cirros 2025-07-06 20:37:29.720852 | orchestrator | 2025-07-06 20:37:10 | INFO  | Setting property replace_frequency: never 2025-07-06 20:37:29.720863 | orchestrator | 2025-07-06 20:37:10 | INFO  | Setting property uuid_validity: none 2025-07-06 20:37:29.720873 | orchestrator | 2025-07-06 20:37:10 | INFO  | Setting property provided_until: none 2025-07-06 20:37:29.720922 | orchestrator | 2025-07-06 20:37:10 | INFO  | Setting property image_description: Cirros 2025-07-06 20:37:29.720953 | orchestrator | 2025-07-06 20:37:11 | INFO  | Setting property image_name: Cirros 2025-07-06 20:37:29.720965 | orchestrator | 2025-07-06 20:37:11 | INFO  | Setting property internal_version: 0.6.2 2025-07-06 20:37:29.720980 | orchestrator | 2025-07-06 20:37:11 | INFO  | Setting property image_original_user: cirros 2025-07-06 20:37:29.720991 | orchestrator | 2025-07-06 20:37:11 | INFO  | Setting property os_version: 0.6.2 2025-07-06 20:37:29.721003 | orchestrator | 2025-07-06 20:37:11 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-07-06 20:37:29.721018 | orchestrator | 2025-07-06 20:37:12 | INFO  | Setting property image_build_date: 2023-05-30 2025-07-06 20:37:29.721030 | orchestrator | 2025-07-06 20:37:12 | INFO  | Checking status of 'Cirros 0.6.2' 2025-07-06 20:37:29.721043 | orchestrator | 2025-07-06 20:37:12 | INFO  | Checking visibility of 'Cirros 0.6.2' 2025-07-06 20:37:29.721062 | orchestrator | 2025-07-06 20:37:12 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2025-07-06 20:37:29.721082 | orchestrator | 2025-07-06 20:37:12 | INFO  | Processing image 'Cirros 0.6.3' 2025-07-06 20:37:29.721103 | orchestrator | 2025-07-06 20:37:12 | INFO  | Tested URL 
https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2025-07-06 20:37:29.721124 | orchestrator | 2025-07-06 20:37:12 | INFO  | Importing image Cirros 0.6.3 2025-07-06 20:37:29.721146 | orchestrator | 2025-07-06 20:37:12 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-07-06 20:37:29.721169 | orchestrator | 2025-07-06 20:37:14 | INFO  | Waiting for import to complete... 2025-07-06 20:37:29.721191 | orchestrator | 2025-07-06 20:37:24 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2025-07-06 20:37:29.721209 | orchestrator | 2025-07-06 20:37:24 | INFO  | Checking parameters of 'Cirros 0.6.3' 2025-07-06 20:37:29.721243 | orchestrator | 2025-07-06 20:37:24 | INFO  | Setting internal_version = 0.6.3 2025-07-06 20:37:29.721257 | orchestrator | 2025-07-06 20:37:24 | INFO  | Setting image_original_user = cirros 2025-07-06 20:37:29.721270 | orchestrator | 2025-07-06 20:37:24 | INFO  | Adding tag os:cirros 2025-07-06 20:37:29.721283 | orchestrator | 2025-07-06 20:37:25 | INFO  | Setting property architecture: x86_64 2025-07-06 20:37:29.721296 | orchestrator | 2025-07-06 20:37:25 | INFO  | Setting property hw_disk_bus: scsi 2025-07-06 20:37:29.721309 | orchestrator | 2025-07-06 20:37:25 | INFO  | Setting property hw_rng_model: virtio 2025-07-06 20:37:29.721321 | orchestrator | 2025-07-06 20:37:25 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-07-06 20:37:29.721333 | orchestrator | 2025-07-06 20:37:26 | INFO  | Setting property hw_watchdog_action: reset 2025-07-06 20:37:29.721346 | orchestrator | 2025-07-06 20:37:26 | INFO  | Setting property hypervisor_type: qemu 2025-07-06 20:37:29.721359 | orchestrator | 2025-07-06 20:37:26 | INFO  | Setting property os_distro: cirros 2025-07-06 20:37:29.721371 | orchestrator | 2025-07-06 20:37:26 | INFO  | Setting property replace_frequency: never 2025-07-06 20:37:29.721382 | 
orchestrator | 2025-07-06 20:37:26 | INFO  | Setting property uuid_validity: none 2025-07-06 20:37:29.721392 | orchestrator | 2025-07-06 20:37:26 | INFO  | Setting property provided_until: none 2025-07-06 20:37:29.721446 | orchestrator | 2025-07-06 20:37:27 | INFO  | Setting property image_description: Cirros 2025-07-06 20:37:29.721458 | orchestrator | 2025-07-06 20:37:27 | INFO  | Setting property image_name: Cirros 2025-07-06 20:37:29.721469 | orchestrator | 2025-07-06 20:37:27 | INFO  | Setting property internal_version: 0.6.3 2025-07-06 20:37:29.721479 | orchestrator | 2025-07-06 20:37:27 | INFO  | Setting property image_original_user: cirros 2025-07-06 20:37:29.721490 | orchestrator | 2025-07-06 20:37:28 | INFO  | Setting property os_version: 0.6.3 2025-07-06 20:37:29.721501 | orchestrator | 2025-07-06 20:37:28 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-07-06 20:37:29.721511 | orchestrator | 2025-07-06 20:37:28 | INFO  | Setting property image_build_date: 2024-09-26 2025-07-06 20:37:29.721522 | orchestrator | 2025-07-06 20:37:28 | INFO  | Checking status of 'Cirros 0.6.3' 2025-07-06 20:37:29.721533 | orchestrator | 2025-07-06 20:37:28 | INFO  | Checking visibility of 'Cirros 0.6.3' 2025-07-06 20:37:29.721632 | orchestrator | 2025-07-06 20:37:28 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2025-07-06 20:37:29.974160 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2025-07-06 20:37:31.964371 | orchestrator | 2025-07-06 20:37:31 | INFO  | date: 2025-07-06 2025-07-06 20:37:31.964473 | orchestrator | 2025-07-06 20:37:31 | INFO  | image: octavia-amphora-haproxy-2024.2.20250706.qcow2 2025-07-06 20:37:31.964492 | orchestrator | 2025-07-06 20:37:31 | INFO  | url: 
https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250706.qcow2 2025-07-06 20:37:31.964525 | orchestrator | 2025-07-06 20:37:31 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250706.qcow2.CHECKSUM 2025-07-06 20:37:31.983258 | orchestrator | 2025-07-06 20:37:31 | INFO  | checksum: e7dc90cac0c85815d1d7db62923debdc1ff8dd88fe2a46fd4546115b627650c4 2025-07-06 20:37:32.068007 | orchestrator | 2025-07-06 20:37:32 | INFO  | It takes a moment until task 666f64fe-1927-4b94-9a68-0ca995ecc6f9 (image-manager) has been started and output is visible here. 2025-07-06 20:38:32.965248 | orchestrator | /usr/local/lib/python3.13/site-packages/openstack_image_manager/__init__.py:5: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 
2025-07-06 20:38:32.965371 | orchestrator | from pkg_resources import get_distribution, DistributionNotFound 2025-07-06 20:38:32.965389 | orchestrator | 2025-07-06 20:37:34 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-07-06' 2025-07-06 20:38:32.965406 | orchestrator | 2025-07-06 20:37:34 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250706.qcow2: 200 2025-07-06 20:38:32.965420 | orchestrator | 2025-07-06 20:37:34 | INFO  | Importing image OpenStack Octavia Amphora 2025-07-06 2025-07-06 20:38:32.965432 | orchestrator | 2025-07-06 20:37:34 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250706.qcow2 2025-07-06 20:38:32.965445 | orchestrator | 2025-07-06 20:37:35 | INFO  | Waiting for image to leave queued state... 2025-07-06 20:38:32.965457 | orchestrator | 2025-07-06 20:37:37 | INFO  | Waiting for import to complete... 2025-07-06 20:38:32.965495 | orchestrator | 2025-07-06 20:37:47 | INFO  | Waiting for import to complete... 2025-07-06 20:38:32.965507 | orchestrator | 2025-07-06 20:37:57 | INFO  | Waiting for import to complete... 2025-07-06 20:38:32.965518 | orchestrator | 2025-07-06 20:38:07 | INFO  | Waiting for import to complete... 2025-07-06 20:38:32.965528 | orchestrator | 2025-07-06 20:38:18 | INFO  | Waiting for import to complete... 
2025-07-06 20:38:32.965539 | orchestrator | 2025-07-06 20:38:28 | INFO  | Import of 'OpenStack Octavia Amphora 2025-07-06' successfully completed, reloading images 2025-07-06 20:38:32.965551 | orchestrator | 2025-07-06 20:38:28 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-07-06' 2025-07-06 20:38:32.965563 | orchestrator | 2025-07-06 20:38:28 | INFO  | Setting internal_version = 2025-07-06 2025-07-06 20:38:32.965573 | orchestrator | 2025-07-06 20:38:28 | INFO  | Setting image_original_user = ubuntu 2025-07-06 20:38:32.965584 | orchestrator | 2025-07-06 20:38:28 | INFO  | Adding tag amphora 2025-07-06 20:38:32.965640 | orchestrator | 2025-07-06 20:38:28 | INFO  | Adding tag os:ubuntu 2025-07-06 20:38:32.965654 | orchestrator | 2025-07-06 20:38:29 | INFO  | Setting property architecture: x86_64 2025-07-06 20:38:32.965665 | orchestrator | 2025-07-06 20:38:29 | INFO  | Setting property hw_disk_bus: scsi 2025-07-06 20:38:32.965675 | orchestrator | 2025-07-06 20:38:29 | INFO  | Setting property hw_rng_model: virtio 2025-07-06 20:38:32.965697 | orchestrator | 2025-07-06 20:38:29 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-07-06 20:38:32.965708 | orchestrator | 2025-07-06 20:38:30 | INFO  | Setting property hw_watchdog_action: reset 2025-07-06 20:38:32.965719 | orchestrator | 2025-07-06 20:38:30 | INFO  | Setting property hypervisor_type: qemu 2025-07-06 20:38:32.965730 | orchestrator | 2025-07-06 20:38:30 | INFO  | Setting property os_distro: ubuntu 2025-07-06 20:38:32.965743 | orchestrator | 2025-07-06 20:38:30 | INFO  | Setting property replace_frequency: quarterly 2025-07-06 20:38:32.965755 | orchestrator | 2025-07-06 20:38:30 | INFO  | Setting property uuid_validity: last-1 2025-07-06 20:38:32.965768 | orchestrator | 2025-07-06 20:38:31 | INFO  | Setting property provided_until: none 2025-07-06 20:38:32.965781 | orchestrator | 2025-07-06 20:38:31 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2025-07-06 
20:38:32.965794 | orchestrator | 2025-07-06 20:38:31 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2025-07-06 20:38:32.965806 | orchestrator | 2025-07-06 20:38:31 | INFO  | Setting property internal_version: 2025-07-06 2025-07-06 20:38:32.965818 | orchestrator | 2025-07-06 20:38:31 | INFO  | Setting property image_original_user: ubuntu 2025-07-06 20:38:32.965831 | orchestrator | 2025-07-06 20:38:31 | INFO  | Setting property os_version: 2025-07-06 2025-07-06 20:38:32.965844 | orchestrator | 2025-07-06 20:38:32 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250706.qcow2 2025-07-06 20:38:32.965876 | orchestrator | 2025-07-06 20:38:32 | INFO  | Setting property image_build_date: 2025-07-06 2025-07-06 20:38:32.965889 | orchestrator | 2025-07-06 20:38:32 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-07-06' 2025-07-06 20:38:32.965901 | orchestrator | 2025-07-06 20:38:32 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-07-06' 2025-07-06 20:38:32.965914 | orchestrator | 2025-07-06 20:38:32 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2025-07-06 20:38:32.965935 | orchestrator | 2025-07-06 20:38:32 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2025-07-06 20:38:32.965948 | orchestrator | 2025-07-06 20:38:32 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2025-07-06 20:38:32.965961 | orchestrator | 2025-07-06 20:38:32 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2025-07-06 20:38:33.423459 | orchestrator | ok: Runtime: 0:03:10.703845 2025-07-06 20:38:33.452092 | 2025-07-06 20:38:33.452230 | TASK [Run checks] 2025-07-06 20:38:34.225411 | orchestrator | + set -e 2025-07-06 20:38:34.225556 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-07-06 20:38:34.225566 | 
orchestrator | ++ export INTERACTIVE=false 2025-07-06 20:38:34.225575 | orchestrator | ++ INTERACTIVE=false 2025-07-06 20:38:34.225581 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-07-06 20:38:34.225585 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-07-06 20:38:34.225591 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-07-06 20:38:34.226817 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-07-06 20:38:34.233033 | orchestrator | 2025-07-06 20:38:34.233105 | orchestrator | # CHECK 2025-07-06 20:38:34.233120 | orchestrator | 2025-07-06 20:38:34.233133 | orchestrator | ++ export MANAGER_VERSION=latest 2025-07-06 20:38:34.233150 | orchestrator | ++ MANAGER_VERSION=latest 2025-07-06 20:38:34.233162 | orchestrator | + echo 2025-07-06 20:38:34.233174 | orchestrator | + echo '# CHECK' 2025-07-06 20:38:34.233186 | orchestrator | + echo 2025-07-06 20:38:34.233203 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-07-06 20:38:34.234163 | orchestrator | ++ semver latest 5.0.0 2025-07-06 20:38:34.298128 | orchestrator | 2025-07-06 20:38:34.298236 | orchestrator | ## Containers @ testbed-manager 2025-07-06 20:38:34.298251 | orchestrator | 2025-07-06 20:38:34.298266 | orchestrator | + [[ -1 -eq -1 ]] 2025-07-06 20:38:34.298277 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-07-06 20:38:34.298289 | orchestrator | + echo 2025-07-06 20:38:34.298301 | orchestrator | + echo '## Containers @ testbed-manager' 2025-07-06 20:38:34.298313 | orchestrator | + echo 2025-07-06 20:38:34.298324 | orchestrator | + osism container testbed-manager ps 2025-07-06 20:38:36.533011 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-07-06 20:38:36.533142 | orchestrator | c699a2a2bfe6 registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes 
prometheus_blackbox_exporter
2025-07-06 20:38:36.533167 | orchestrator | b4b8774b2b22 registry.osism.tech/kolla/prometheus-alertmanager:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_alertmanager
2025-07-06 20:38:36.533180 | orchestrator | b56bb231fed8 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor
2025-07-06 20:38:36.533200 | orchestrator | 41d1bdc6965a registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter
2025-07-06 20:38:36.533212 | orchestrator | d5de45dc2f62 registry.osism.tech/kolla/prometheus-v2-server:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_server
2025-07-06 20:38:36.533228 | orchestrator | ac49455369de registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" 18 minutes ago Up 17 minutes cephclient
2025-07-06 20:38:36.533240 | orchestrator | ceee230db5ff registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron
2025-07-06 20:38:36.533252 | orchestrator | 7dfc9d111a80 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox
2025-07-06 20:38:36.533264 | orchestrator | 49d680c6a098 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes fluentd
2025-07-06 20:38:36.533303 | orchestrator | 14434ce84d44 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 31 minutes ago Up 30 minutes (healthy) 80/tcp phpmyadmin
2025-07-06 20:38:36.533315 | orchestrator | 403c1c57962c registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 32 minutes ago Up 31 minutes openstackclient
2025-07-06 20:38:36.533327 | orchestrator | 4c80b4f03f80 registry.osism.tech/osism/homer:v25.05.2 "/bin/sh /entrypoint…" 32 minutes ago Up 31 minutes (healthy) 8080/tcp homer
2025-07-06 20:38:36.533338 | orchestrator | 3207a4e1184b registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" 40 minutes ago Up 40 minutes (healthy) osism-kubernetes
2025-07-06 20:38:36.533349 | orchestrator | 69c709eb94c6 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 56 minutes ago Up 55 minutes (healthy) 192.168.16.5:3128->3128/tcp squid
2025-07-06 20:38:36.533367 | orchestrator | 846e0ba74fbe registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" About an hour ago Up 39 minutes (healthy) manager-inventory_reconciler-1
2025-07-06 20:38:36.533400 | orchestrator | 06aff78b468a registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" About an hour ago Up 40 minutes (healthy) osism-ansible
2025-07-06 20:38:36.533412 | orchestrator | 7e65e39d244d registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" About an hour ago Up 40 minutes (healthy) kolla-ansible
2025-07-06 20:38:36.533424 | orchestrator | 6f58d204b6a2 registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" About an hour ago Up 40 minutes (healthy) ceph-ansible
2025-07-06 20:38:36.533435 | orchestrator | 51a573bc24d3 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" About an hour ago Up 40 minutes (healthy) 8000/tcp manager-ara-server-1
2025-07-06 20:38:36.533446 | orchestrator | 7c4be25694c0 registry.osism.tech/dockerhub/library/mariadb:11.8.2 "docker-entrypoint.s…" About an hour ago Up 40 minutes (healthy) 3306/tcp manager-mariadb-1
2025-07-06 20:38:36.533457 | orchestrator | 012937b46ee6 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 40 minutes (healthy) manager-openstack-1
2025-07-06 20:38:36.533469 | orchestrator | 8fad013d8840 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 40 minutes (healthy) manager-beat-1
2025-07-06 20:38:36.533480 | orchestrator | a246d074deb1 registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" About an hour ago Up 40 minutes (healthy) 6379/tcp manager-redis-1
2025-07-06 20:38:36.533499 | orchestrator | 2854b4d0cd0d registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 40 minutes (healthy) manager-listener-1
2025-07-06 20:38:36.533511 | orchestrator | 52da8882672c registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" About an hour ago Up 40 minutes (healthy) osismclient
2025-07-06 20:38:36.533522 | orchestrator | 8a70c5e496b4 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 40 minutes (healthy) manager-flower-1
2025-07-06 20:38:36.533533 | orchestrator | 0ddb15874585 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" About an hour ago Up 40 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1
2025-07-06 20:38:36.533545 | orchestrator | b2472d3790c2 registry.osism.tech/dockerhub/library/traefik:v3.4.3 "/entrypoint.sh trae…" About an hour ago Up About an hour (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2025-07-06 20:38:36.780473 | orchestrator |
2025-07-06 20:38:36.780573 | orchestrator | ## Images @ testbed-manager
2025-07-06 20:38:36.780588 | orchestrator |
2025-07-06 20:38:36.780646 | orchestrator | + echo
2025-07-06 20:38:36.780660 | orchestrator | + echo '## Images @ testbed-manager'
2025-07-06 20:38:36.780673 | orchestrator | + echo
2025-07-06 20:38:36.780684 | orchestrator | + osism container testbed-manager images
2025-07-06 20:38:38.833327 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-07-06 20:38:38.833483 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest 75216d8812a2 About an hour ago 1.21GB
2025-07-06 20:38:38.833510 | orchestrator | registry.osism.tech/osism/osism-kubernetes a7bc3c2a1a38 4 hours ago 1.21GB
2025-07-06 20:38:38.833528 | orchestrator | registry.osism.tech/osism/homer v25.05.2 24de99a938e3 17 hours ago 11.5MB
2025-07-06 20:38:38.833545 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 beca3f9f79e6 17 hours ago 233MB
2025-07-06 20:38:38.833562 | orchestrator | registry.osism.tech/osism/cephclient reef f5964dd8a4b4 17 hours ago 453MB
2025-07-06 20:38:38.833579 | orchestrator | registry.osism.tech/kolla/cron 2024.2 5bd29562eae5 19 hours ago 318MB
2025-07-06 20:38:38.833596 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 d709f500305e 19 hours ago 746MB
2025-07-06 20:38:38.833651 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 29d9adf832be 19 hours ago 628MB
2025-07-06 20:38:38.833669 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 9b38961b2d3c 19 hours ago 358MB
2025-07-06 20:38:38.833687 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 be8ff1ec4a38 19 hours ago 410MB
2025-07-06 20:38:38.833704 | orchestrator | registry.osism.tech/kolla/prometheus-blackbox-exporter 2024.2 65c70b67eb9f 19 hours ago 360MB
2025-07-06 20:38:38.833722 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2024.2 1e9c9ed00055 19 hours ago 456MB
2025-07-06 20:38:38.833740 | orchestrator | registry.osism.tech/kolla/prometheus-v2-server 2024.2 c8c9dc1902b9 19 hours ago 891MB
2025-07-06 20:38:38.833756 | orchestrator | registry.osism.tech/osism/osism-ansible latest 40ffcc9175f1 20 hours ago 575MB
2025-07-06 20:38:38.833773 | orchestrator | registry.osism.tech/osism/kolla-ansible 2024.2 03671c67178b 20 hours ago 571MB
2025-07-06 20:38:38.833791 | orchestrator | registry.osism.tech/osism/osism latest 97786b94b388 20 hours ago 310MB
2025-07-06 20:38:38.833841 | orchestrator | registry.osism.tech/osism/ceph-ansible reef 2e5389d58192 20 hours ago 535MB
2025-07-06 20:38:38.833880 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest 31b10c015f64 21 hours ago 307MB
2025-07-06 20:38:38.833899 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.4.3 4113453efcb3 10 days ago 226MB
2025-07-06 20:38:38.833916 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.2 7fb85a4198e9 3 weeks ago 329MB
2025-07-06 20:38:38.833934 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.4-alpine 7ff232a1fe04 5 weeks ago 41.4MB
2025-07-06 20:38:38.833951 | orchestrator | phpmyadmin/phpmyadmin 5.2 0276a66ce322 5 months ago 571MB
2025-07-06 20:38:38.833968 | orchestrator | registry.osism.tech/osism/ara-server 1.7.2 bb44122eb176 10 months ago 300MB
2025-07-06 20:38:38.833985 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 13 months ago 146MB
2025-07-06 20:38:39.092826 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-07-06 20:38:39.093030 | orchestrator | ++ semver latest 5.0.0
2025-07-06 20:38:39.150593 | orchestrator |
2025-07-06 20:38:39.150734 | orchestrator | ## Containers @ testbed-node-0
2025-07-06 20:38:39.150750 | orchestrator |
2025-07-06 20:38:39.150763 | orchestrator | + [[ -1 -eq -1 ]]
2025-07-06 20:38:39.150774 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-07-06 20:38:39.150786 | orchestrator | + echo
2025-07-06 20:38:39.150797 | orchestrator | + echo '## Containers @ testbed-node-0'
2025-07-06 20:38:39.150809 | orchestrator | + echo
2025-07-06 20:38:39.150821 | orchestrator | + osism container testbed-node-0 ps
2025-07-06 20:38:41.425328 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-07-06 20:38:41.425457 | orchestrator | 5f5d0cbc21f7 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy
2025-07-06 20:38:41.425479 | orchestrator | 6109c11cd7b3 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor
2025-07-06 20:38:41.425524 | orchestrator | 966a22187f3a registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api
2025-07-06 20:38:41.425537 | orchestrator | 3f6314ed1911 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler
2025-07-06 20:38:41.425548 | orchestrator | cedbf4a27c49 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes grafana
2025-07-06 20:38:41.425560 | orchestrator | bfb553ef0a87 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) glance_api
2025-07-06 20:38:41.425571 | orchestrator | d5ef5c775309 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) cinder_scheduler
2025-07-06 20:38:41.425582 | orchestrator | b3bc1cc0ff8c registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api
2025-07-06 20:38:41.425593 | orchestrator | 496dfe9eefe4 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) magnum_conductor
2025-07-06 20:38:41.425648 | orchestrator | a94f776c0f02 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_api
2025-07-06 20:38:41.425683 | orchestrator | c7c342b18e18 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) placement_api
2025-07-06 20:38:41.425696 | orchestrator | 24cb858259f2 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) neutron_server
2025-07-06 20:38:41.425707 | orchestrator | b21361f2c20c registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter
2025-07-06 20:38:41.425719 | orchestrator | 47196823981b registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_worker
2025-07-06 20:38:41.425730 | orchestrator | c2d38035688e registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_mdns
2025-07-06 20:38:41.425740 | orchestrator | bf69858a4a24 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor
2025-07-06 20:38:41.425751 | orchestrator | 199000c3a659 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_producer
2025-07-06 20:38:41.425762 | orchestrator | 1bb2be42d51f registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_central
2025-07-06 20:38:41.425773 | orchestrator | 064450b17513 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_memcached_exporter
2025-07-06 20:38:41.425784 | orchestrator | 5bde8891f5ad registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api
2025-07-06 20:38:41.425794 | orchestrator | 7cd245112d48 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_mysqld_exporter
2025-07-06 20:38:41.425840 | orchestrator | ab5e7478da23 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter
2025-07-06 20:38:41.425852 | orchestrator | 27fe5f2ceb09 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_backend_bind9
2025-07-06 20:38:41.425863 | orchestrator | 343866192dc2 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_worker
2025-07-06 20:38:41.425874 | orchestrator | de1b8c36883e registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_keystone_listener
2025-07-06 20:38:41.425885 | orchestrator | d28240b7e9b8 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_api
2025-07-06 20:38:41.425896 | orchestrator | 91cbb0cabdad registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-0
2025-07-06 20:38:41.425911 | orchestrator | 0aadb1c02633 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone
2025-07-06 20:38:41.425923 | orchestrator | b2ce1b997b8f registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 19 minutes ago Up 18 minutes (healthy) keystone_fernet
2025-07-06 20:38:41.425934 | orchestrator | f0237f582919 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh
2025-07-06 20:38:41.425954 | orchestrator | aa8751e817e5 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon
2025-07-06 20:38:41.425965 | orchestrator | 9e6029038144 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb
2025-07-06 20:38:41.425982 | orchestrator | 8f57cb32b8e3 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch_dashboards
2025-07-06 20:38:41.425993 | orchestrator | 9b9a035da258 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch
2025-07-06 20:38:41.426004 | orchestrator | a6b78a9801ca registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-0
2025-07-06 20:38:41.426093 | orchestrator | 6b916f9a8410 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived
2025-07-06 20:38:41.426109 | orchestrator | 780c6305c7ff registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql
2025-07-06 20:38:41.426120 | orchestrator | fd01f71b3fa1 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy
2025-07-06 20:38:41.426131 | orchestrator | d5cd7d6e3689 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_northd
2025-07-06 20:38:41.426142 | orchestrator | 69988cfec8d1 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_sb_db
2025-07-06 20:38:41.426153 | orchestrator | 9ae33bc9d903 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_nb_db
2025-07-06 20:38:41.426164 | orchestrator | 2758d8101d83 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller
2025-07-06 20:38:41.426175 | orchestrator | e4b0660d883b registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-0
2025-07-06 20:38:41.426187 | orchestrator | 28a6b0de3ec3 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq
2025-07-06 20:38:41.426208 | orchestrator | a6ada17fdd52 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 29 minutes ago Up 28 minutes (healthy) openvswitch_vswitchd
2025-07-06 20:38:41.426227 | orchestrator | a56f379a5576 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db
2025-07-06 20:38:41.426239 | orchestrator | 251a4a51be5e registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis_sentinel
2025-07-06 20:38:41.426250 | orchestrator | bba40f85afdc registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis
2025-07-06 20:38:41.426261 | orchestrator | 21b870988def registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) memcached
2025-07-06 20:38:41.426272 | orchestrator | 8202d7f27cd3 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron
2025-07-06 20:38:41.426291 | orchestrator | 78686ba54729 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox
2025-07-06 20:38:41.426303 | orchestrator | f5e289b69194 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd
2025-07-06 20:38:41.690278 | orchestrator |
2025-07-06 20:38:41.690410 | orchestrator | ## Images @ testbed-node-0
2025-07-06 20:38:41.690426 | orchestrator |
2025-07-06 20:38:41.690438 | orchestrator | + echo
2025-07-06 20:38:41.690450 | orchestrator | + echo '## Images @ testbed-node-0'
2025-07-06 20:38:41.690462 | orchestrator | + echo
2025-07-06 20:38:41.690474 | orchestrator | + osism container testbed-node-0 images
2025-07-06 20:38:43.865547 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-07-06 20:38:43.865703 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 03848d10742f 17 hours ago 1.27GB
2025-07-06 20:38:43.865720 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 b82e7e88101e 19 hours ago 417MB
2025-07-06 20:38:43.865732 | orchestrator | registry.osism.tech/kolla/cron 2024.2 5bd29562eae5 19 hours ago 318MB
2025-07-06 20:38:43.865743 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 0ca5de23172e 19 hours ago 375MB
2025-07-06 20:38:43.865754 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 d709f500305e 19 hours ago 746MB
2025-07-06 20:38:43.865766 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 d02d6940a8b5 19 hours ago 1.01GB
2025-07-06 20:38:43.865777 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 1105a60c1719 19 hours ago 329MB
2025-07-06 20:38:43.865788 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 29d9adf832be 19 hours ago 628MB
2025-07-06 20:38:43.865823 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 e9a720eba69a 19 hours ago 1.59GB
2025-07-06 20:38:43.865835 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 185adae9c4c8 19 hours ago 1.55GB
2025-07-06 20:38:43.865846 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 54a2615f6376 19 hours ago 326MB
2025-07-06 20:38:43.865860 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 4e96396aae5f 19 hours ago 318MB
2025-07-06 20:38:43.865871 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 c1d58b373410 19 hours ago 590MB
2025-07-06 20:38:43.865882 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 0bc426a50b1c 19 hours ago 361MB
2025-07-06 20:38:43.865893 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 a467d36db85c 19 hours ago 361MB
2025-07-06 20:38:43.865905 | orchestrator | registry.osism.tech/kolla/redis 2024.2 6636df81d4fd 19 hours ago 324MB
2025-07-06 20:38:43.865917 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 00ca7d03d45e 19 hours ago 324MB
2025-07-06 20:38:43.865928 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 9b012ab08198 19 hours ago 1.21GB
2025-07-06 20:38:43.865939 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 9b38961b2d3c 19 hours ago 358MB
2025-07-06 20:38:43.865950 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 c5c50682919b 19 hours ago 353MB
2025-07-06 20:38:43.865961 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 5d9e06361c58 19 hours ago 351MB
2025-07-06 20:38:43.865972 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 d39d3669b97e 19 hours ago 344MB
2025-07-06 20:38:43.865983 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 be8ff1ec4a38 19 hours ago 410MB
2025-07-06 20:38:43.866117 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 b03520a86bdf 19 hours ago 946MB
2025-07-06 20:38:43.866136 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 b4874fc13311 19 hours ago 947MB
2025-07-06 20:38:43.866149 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 0640be510c4b 19 hours ago 947MB
2025-07-06 20:38:43.866161 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 c53d82c33a2f 19 hours ago 946MB
2025-07-06 20:38:43.866174 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2024.2 3a0e92590974 19 hours ago 1.11GB
2025-07-06 20:38:43.866186 | orchestrator | registry.osism.tech/kolla/skyline-console 2024.2 e59cd97a71c0 19 hours ago 1.11GB
2025-07-06 20:38:43.866199 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 a1ff21d56e36 19 hours ago 1.13GB
2025-07-06 20:38:43.866212 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 e49cf7b60e84 19 hours ago 1.11GB
2025-07-06 20:38:43.866224 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 240d03c6f462 19 hours ago 1.11GB
2025-07-06 20:38:43.866237 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 4646ebc79da1 19 hours ago 1.42GB
2025-07-06 20:38:43.866249 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 2ef568d403d5 19 hours ago 1.29GB
2025-07-06 20:38:43.866262 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 814e7c91fc45 19 hours ago 1.29GB
2025-07-06 20:38:43.866274 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 2ea867708372 19 hours ago 1.29GB
2025-07-06 20:38:43.866318 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 726425074588 19 hours ago 1.1GB
2025-07-06 20:38:43.866342 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 b25e7c81e55c 19 hours ago 1.1GB
2025-07-06 20:38:43.866356 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 69696392e6bc 19 hours ago 1.12GB
2025-07-06 20:38:43.866368 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 25d72c47ed12 19 hours ago 1.1GB
2025-07-06 20:38:43.866381 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 66eb10b1230c 19 hours ago 1.12GB
2025-07-06 20:38:43.866394 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 f30e4d711b36 19 hours ago 1.05GB
2025-07-06 20:38:43.866407 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 fd777810b3f3 19 hours ago 1.05GB
2025-07-06 20:38:43.866419 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 ab26fc9e7d9a 19 hours ago 1.06GB
2025-07-06 20:38:43.866432 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 eae75800b099 19 hours ago 1.06GB
2025-07-06 20:38:43.866442 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 9abbfc061f11 19 hours ago 1.05GB
2025-07-06 20:38:43.866453 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 bcfa88ab28eb 19 hours ago 1.05GB
2025-07-06 20:38:43.866464 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 2db0bef4ca19 19 hours ago 1.15GB
2025-07-06 20:38:43.866475 | orchestrator | registry.osism.tech/kolla/aodh-notifier 2024.2 a570a268a3de 19 hours ago 1.04GB
2025-07-06 20:38:43.866485 | orchestrator | registry.osism.tech/kolla/aodh-listener 2024.2 2282bf924baf 19 hours ago 1.04GB
2025-07-06 20:38:43.866496 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2024.2 45b1a9e0ad1c 19 hours ago 1.04GB
2025-07-06 20:38:43.866507 | orchestrator | registry.osism.tech/kolla/aodh-api 2024.2 088cac2d00f4 19 hours ago 1.04GB
2025-07-06 20:38:43.866528 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 d846eb049269 19 hours ago 1.24GB
2025-07-06 20:38:43.866539 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 59d99b60450c 19 hours ago 1.04GB
2025-07-06 20:38:43.866550 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 ec3d7074f123 19 hours ago 1.41GB
2025-07-06 20:38:43.866561 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 d0d866142daa 19 hours ago 1.41GB
2025-07-06 20:38:43.866571 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 3ac1011589b1 19 hours ago 1.31GB
2025-07-06 20:38:43.866582 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 1d3921482b32 19 hours ago 1.2GB
2025-07-06 20:38:43.866593 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2024.2 57b1f553e204 19 hours ago 1.04GB
2025-07-06 20:38:43.866628 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2024.2 f9d7e446d73c 19 hours ago 1.04GB
2025-07-06 20:38:43.866639 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 c6eb1d2fd918 19 hours ago 1.06GB
2025-07-06 20:38:43.866650 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 f8357083a505 19 hours ago 1.06GB
2025-07-06 20:38:43.866661 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 f271acd602de 19 hours ago 1.06GB
2025-07-06 20:38:44.156588 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-07-06 20:38:44.157199 | orchestrator | ++ semver latest 5.0.0
2025-07-06 20:38:44.209561 | orchestrator |
2025-07-06 20:38:44.209658 | orchestrator | ## Containers @ testbed-node-1
2025-07-06 20:38:44.209669 | orchestrator |
2025-07-06 20:38:44.209676 | orchestrator | + [[ -1 -eq -1 ]]
2025-07-06 20:38:44.209683 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-07-06 20:38:44.209689 | orchestrator | + echo
2025-07-06 20:38:44.209696 | orchestrator | + echo '## Containers @ testbed-node-1'
2025-07-06 20:38:44.209704 | orchestrator | + echo
2025-07-06 20:38:44.209711 | orchestrator | + osism container testbed-node-1 ps
2025-07-06 20:38:46.421482 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-07-06 20:38:46.421717 | orchestrator | 821f39ce4e79 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy
2025-07-06 20:38:46.421742 | orchestrator | 706ee2a41f63 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor
2025-07-06 20:38:46.421753 | orchestrator | b5aa19db4246 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api
2025-07-06 20:38:46.421765 | orchestrator | 466fb8be260e registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler
2025-07-06 20:38:46.421776 | orchestrator | f2e326290fed registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes grafana
2025-07-06 20:38:46.421787 | orchestrator | 65ad9dbc239e registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) glance_api
2025-07-06 20:38:46.421798 | orchestrator | 4846aae3ac90 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) cinder_scheduler
2025-07-06 20:38:46.421810 | orchestrator | a3a5ca4e7e66 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) cinder_api
2025-07-06 20:38:46.421821 | orchestrator | f660e5a5cf4d registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) magnum_conductor
2025-07-06 20:38:46.421851 | orchestrator | bc276b9eb87d registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_api
2025-07-06 20:38:46.421863 | orchestrator | 3b7aff909c9f registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) placement_api
2025-07-06 20:38:46.421874 | orchestrator | 9a9d031e084c registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) neutron_server
2025-07-06 20:38:46.421885 | orchestrator | 345cd3724d6b registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter
2025-07-06 20:38:46.421897 | orchestrator | b92ed3d8ec5c registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_worker
2025-07-06 20:38:46.421909 | orchestrator | 35aeea847288 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_mdns
2025-07-06 20:38:46.421919 | orchestrator | d5a42a88c7c7 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor
2025-07-06 20:38:46.421931 | orchestrator | 0e472e9e7e77 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_producer
2025-07-06 20:38:46.421942 | orchestrator | 64799419ada0 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_central
2025-07-06 20:38:46.421953 | orchestrator | 37e1b4171edb registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_memcached_exporter
2025-07-06 20:38:46.421978 | orchestrator | 80b47ba362e5 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api
2025-07-06 20:38:46.421990 | orchestrator | 05db492f04b4 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_mysqld_exporter
2025-07-06 20:38:46.422074 | orchestrator | a106267c23f3 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_backend_bind9
2025-07-06 20:38:46.422093 | orchestrator | 2660ca9a12f4 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter
2025-07-06 20:38:46.422106 | orchestrator | 5582df0dc5aa registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_worker
2025-07-06 20:38:46.422365 | orchestrator | ff85b1cb5a38 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_keystone_listener
2025-07-06 20:38:46.422459 | orchestrator | cbb8951fdd3f registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_api
2025-07-06 20:38:46.422474 | orchestrator | f1e48328c9ca registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-1
2025-07-06 20:38:46.422485 | orchestrator | c4e759d0543f registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone
2025-07-06 20:38:46.422515 | orchestrator | 105aade8b2d7 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 19 minutes ago Up 18 minutes (healthy) horizon
2025-07-06 20:38:46.422525 | orchestrator | aff9c9ab08c8 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet
2025-07-06 20:38:46.422535 | orchestrator | 2657c6abc5d4 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh
2025-07-06 20:38:46.422545 | orchestrator | c6c65e4ecc4e registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards
2025-07-06 20:38:46.422555 | orchestrator | 70a9f96491f4 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 22 minutes ago Up 22 minutes (healthy) mariadb
2025-07-06 20:38:46.422565 | orchestrator | 8c53ebf88057 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch
2025-07-06 20:38:46.422574 | orchestrator | bde83d1f2704 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-1
2025-07-06 20:38:46.422584 | orchestrator | cb97b3d82b49 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived
2025-07-06 20:38:46.422594 | orchestrator | 8b34dfa59c62 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql
2025-07-06 20:38:46.422642 | orchestrator | 31a23927f9ae registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy
2025-07-06 20:38:46.422653 | orchestrator | e4fcc752def8 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_northd
2025-07-06 20:38:46.422663 | orchestrator | d58c460353e4 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_sb_db
2025-07-06 20:38:46.422673 | orchestrator | 6552676d2636 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_nb_db
2025-07-06 20:38:46.422683 | orchestrator | abcfc9ff8db4 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_controller
2025-07-06 20:38:46.422708 | orchestrator | bed761e3b11b registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) rabbitmq
2025-07-06 20:38:46.422719 | orchestrator | 8d8cc0f31518 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-1
2025-07-06 20:38:46.422729 | orchestrator | a6200a1ce1c5 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 29 minutes ago Up 28 minutes (healthy) openvswitch_vswitchd
2025-07-06 20:38:46.422739 | orchestrator | 5ce7e16d9d52 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db
2025-07-06 20:38:46.422748 | orchestrator | 298a3d96020d registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis_sentinel
2025-07-06 20:38:46.422775 | orchestrator | 00de2a520d7e registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis
2025-07-06 20:38:46.422793 | orchestrator | 30ef4e234409 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) memcached
2025-07-06 20:38:46.422803 | orchestrator | 4677cc29fae3 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron
2025-07-06 20:38:46.422813 | orchestrator | 6ed6ef6c5bdc registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox
2025-07-06 20:38:46.422823 | orchestrator | 282808f832b2 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd
2025-07-06 20:38:46.679859 | orchestrator |
2025-07-06 20:38:46.679957 | orchestrator | ## Images @ testbed-node-1
2025-07-06 20:38:46.679974 | orchestrator |
2025-07-06 20:38:46.679986 | orchestrator | + echo
2025-07-06 20:38:46.679998 | orchestrator | + echo '## Images @ testbed-node-1'
2025-07-06 20:38:46.680011 | orchestrator | + echo
2025-07-06 20:38:46.680023 | orchestrator | + osism container testbed-node-1 images
2025-07-06 20:38:48.877231 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-07-06 20:38:48.877319 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 03848d10742f 17 hours ago 1.27GB
2025-07-06 20:38:48.877330 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 b82e7e88101e 19 hours ago 417MB
2025-07-06 20:38:48.877338 | orchestrator | registry.osism.tech/kolla/cron 2024.2 5bd29562eae5 19 hours ago 318MB
2025-07-06 20:38:48.877347 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 0ca5de23172e 19 hours ago 375MB
2025-07-06 20:38:48.877355 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 d709f500305e 19 hours ago 746MB
2025-07-06 20:38:48.877363 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 d02d6940a8b5 19 hours ago 1.01GB
2025-07-06 20:38:48.877371 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 1105a60c1719 19 hours ago 329MB
2025-07-06 20:38:48.877379 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 29d9adf832be 19 hours ago 628MB
2025-07-06 20:38:48.877387 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 e9a720eba69a 19 hours ago 1.59GB
2025-07-06 20:38:48.877395 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 185adae9c4c8 19 hours ago 1.55GB
2025-07-06 20:38:48.877403 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 54a2615f6376 19 hours ago 326MB
2025-07-06 20:38:48.877411 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 4e96396aae5f 19 hours ago 318MB
2025-07-06 20:38:48.877419 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 c1d58b373410 19 hours ago 590MB
2025-07-06 20:38:48.877427 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 0bc426a50b1c 19 hours ago 361MB
2025-07-06 20:38:48.877435 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 a467d36db85c 19 hours ago 361MB
2025-07-06 20:38:48.877443 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 00ca7d03d45e 19 hours ago 324MB
2025-07-06 20:38:48.877451 | orchestrator | registry.osism.tech/kolla/redis 2024.2 6636df81d4fd 19 hours ago 324MB
2025-07-06 20:38:48.877460 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 9b012ab08198 19 hours ago 1.21GB
2025-07-06 20:38:48.877467 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 9b38961b2d3c 19 hours ago 358MB
2025-07-06 20:38:48.877475 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 c5c50682919b 19 hours ago 353MB
2025-07-06 20:38:48.877483 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 5d9e06361c58 19 hours ago 351MB
2025-07-06 20:38:48.877512 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 d39d3669b97e 19 hours ago 344MB
2025-07-06 20:38:48.877520 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 be8ff1ec4a38 19 hours ago 410MB
2025-07-06 20:38:48.877528 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 b03520a86bdf 19 hours ago 946MB
2025-07-06 20:38:48.877550 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 b4874fc13311 19 hours ago 947MB
2025-07-06 20:38:48.877558 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 0640be510c4b 19 hours ago 947MB
2025-07-06 20:38:48.877569 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 c53d82c33a2f 19 hours ago 946MB
2025-07-06 20:38:48.877583 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 a1ff21d56e36 19 hours ago 1.13GB
2025-07-06 20:38:48.877591 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 e49cf7b60e84 19 hours ago 1.11GB
2025-07-06 20:38:48.877599 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 240d03c6f462 19 hours ago 1.11GB
2025-07-06 20:38:48.877667 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 4646ebc79da1 19 hours ago 1.42GB
2025-07-06 20:38:48.877677 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 2ef568d403d5 19 hours ago 1.29GB
2025-07-06 20:38:48.877685 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 814e7c91fc45 19 hours ago 1.29GB
2025-07-06
20:38:48.877693 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 2ea867708372 19 hours ago 1.29GB 2025-07-06 20:38:48.877701 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 f30e4d711b36 19 hours ago 1.05GB 2025-07-06 20:38:48.877709 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 fd777810b3f3 19 hours ago 1.05GB 2025-07-06 20:38:48.877733 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 ab26fc9e7d9a 19 hours ago 1.06GB 2025-07-06 20:38:48.877741 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 eae75800b099 19 hours ago 1.06GB 2025-07-06 20:38:48.877749 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 9abbfc061f11 19 hours ago 1.05GB 2025-07-06 20:38:48.877757 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 bcfa88ab28eb 19 hours ago 1.05GB 2025-07-06 20:38:48.877765 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 2db0bef4ca19 19 hours ago 1.15GB 2025-07-06 20:38:48.877773 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 d846eb049269 19 hours ago 1.24GB 2025-07-06 20:38:48.877782 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 59d99b60450c 19 hours ago 1.04GB 2025-07-06 20:38:48.877791 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 ec3d7074f123 19 hours ago 1.41GB 2025-07-06 20:38:48.877800 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 d0d866142daa 19 hours ago 1.41GB 2025-07-06 20:38:48.877809 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 3ac1011589b1 19 hours ago 1.31GB 2025-07-06 20:38:48.877818 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 1d3921482b32 19 hours ago 1.2GB 2025-07-06 20:38:48.877827 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 c6eb1d2fd918 19 hours ago 1.06GB 2025-07-06 20:38:48.877836 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 f8357083a505 19 hours ago 
1.06GB 2025-07-06 20:38:48.877846 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 f271acd602de 19 hours ago 1.06GB 2025-07-06 20:38:49.156259 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-07-06 20:38:49.157399 | orchestrator | ++ semver latest 5.0.0 2025-07-06 20:38:49.219073 | orchestrator | 2025-07-06 20:38:49.219164 | orchestrator | ## Containers @ testbed-node-2 2025-07-06 20:38:49.219179 | orchestrator | 2025-07-06 20:38:49.219191 | orchestrator | + [[ -1 -eq -1 ]] 2025-07-06 20:38:49.219202 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-07-06 20:38:49.219213 | orchestrator | + echo 2025-07-06 20:38:49.219225 | orchestrator | + echo '## Containers @ testbed-node-2' 2025-07-06 20:38:49.219237 | orchestrator | + echo 2025-07-06 20:38:49.219249 | orchestrator | + osism container testbed-node-2 ps 2025-07-06 20:38:51.414916 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-07-06 20:38:51.415032 | orchestrator | ddcb293d4c5b registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy 2025-07-06 20:38:51.415050 | orchestrator | 04ea52fd3606 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor 2025-07-06 20:38:51.415063 | orchestrator | 2f03de369654 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_api 2025-07-06 20:38:51.415825 | orchestrator | 11bd2aecdace registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-07-06 20:38:51.415856 | orchestrator | 970716652abf registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes grafana 2025-07-06 20:38:51.415868 | orchestrator | c1fbcb0d3e38 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 
minutes (healthy) glance_api 2025-07-06 20:38:51.415879 | orchestrator | 85b110cde82f registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) cinder_scheduler 2025-07-06 20:38:51.415891 | orchestrator | 8f2b5908a298 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 10 minutes (healthy) cinder_api 2025-07-06 20:38:51.415902 | orchestrator | c34f8a3b91a6 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 13 minutes ago Up 12 minutes (healthy) magnum_conductor 2025-07-06 20:38:51.415913 | orchestrator | 680359dfe5ae registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_api 2025-07-06 20:38:51.415944 | orchestrator | 17fd42f398ce registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) placement_api 2025-07-06 20:38:51.415955 | orchestrator | 27c2e0aa8408 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) neutron_server 2025-07-06 20:38:51.415966 | orchestrator | 56391e89c9c1 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 14 minutes prometheus_elasticsearch_exporter 2025-07-06 20:38:51.415983 | orchestrator | e987f1e6d027 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_worker 2025-07-06 20:38:51.415995 | orchestrator | 5aa77e69e2d2 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_mdns 2025-07-06 20:38:51.416006 | orchestrator | 3623f0c3ef88 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor 2025-07-06 20:38:51.416036 | orchestrator | 512003f69988 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init 
--single-…" 15 minutes ago Up 15 minutes (healthy) designate_producer 2025-07-06 20:38:51.416048 | orchestrator | bc4b448b1c04 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_central 2025-07-06 20:38:51.416059 | orchestrator | 4445cdcc76f3 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_memcached_exporter 2025-07-06 20:38:51.416069 | orchestrator | 2e05ab181c44 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api 2025-07-06 20:38:51.416080 | orchestrator | 25eadb438311 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_mysqld_exporter 2025-07-06 20:38:51.416142 | orchestrator | b70c56f19a80 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_backend_bind9 2025-07-06 20:38:51.416156 | orchestrator | f1b0e5f4920d registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter 2025-07-06 20:38:51.416168 | orchestrator | 62ad9a66b704 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_worker 2025-07-06 20:38:51.416179 | orchestrator | 0fcba102a714 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_keystone_listener 2025-07-06 20:38:51.416221 | orchestrator | 03fcbb266e44 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_api 2025-07-06 20:38:51.416248 | orchestrator | 10212cb9953f registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-2 2025-07-06 20:38:51.416260 
| orchestrator | 56e40bd6aec2 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone 2025-07-06 20:38:51.416271 | orchestrator | 74c7ce4d4e11 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon 2025-07-06 20:38:51.416683 | orchestrator | aaee1782a068 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet 2025-07-06 20:38:51.416781 | orchestrator | a520576a20dc registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh 2025-07-06 20:38:51.416797 | orchestrator | 6f0aa51edda6 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards 2025-07-06 20:38:51.416808 | orchestrator | f8383c7c60b2 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb 2025-07-06 20:38:51.416820 | orchestrator | aec56b8e4c70 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch 2025-07-06 20:38:51.416830 | orchestrator | ac67ba43f5b8 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-2 2025-07-06 20:38:51.416841 | orchestrator | d505d0c76f06 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 24 minutes ago Up 23 minutes keepalived 2025-07-06 20:38:51.416875 | orchestrator | e0c2730f0932 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql 2025-07-06 20:38:51.416932 | orchestrator | fc7a8749f6f7 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy 2025-07-06 20:38:51.416945 | orchestrator | 47f5638ead75 registry.osism.tech/kolla/ovn-northd:2024.2 
"dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_northd 2025-07-06 20:38:51.416956 | orchestrator | 9072c20cdc71 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_sb_db 2025-07-06 20:38:51.416967 | orchestrator | 3c40274862b7 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_nb_db 2025-07-06 20:38:51.416978 | orchestrator | 17401b054f44 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) rabbitmq 2025-07-06 20:38:51.417006 | orchestrator | 0b0296a44743 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_controller 2025-07-06 20:38:51.417018 | orchestrator | 4fcf94228996 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-2 2025-07-06 20:38:51.417029 | orchestrator | f688fb547578 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd 2025-07-06 20:38:51.417113 | orchestrator | 8640e5259863 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db 2025-07-06 20:38:51.417128 | orchestrator | 921d65e0bab1 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis_sentinel 2025-07-06 20:38:51.417139 | orchestrator | a8a95505bc4d registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis 2025-07-06 20:38:51.417151 | orchestrator | 49ed90e2884c registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) memcached 2025-07-06 20:38:51.417162 | orchestrator | 03421988430d registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron 2025-07-06 20:38:51.417173 
| orchestrator | b6b4a3c099b6 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox 2025-07-06 20:38:51.417184 | orchestrator | 6e56468c5da3 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd 2025-07-06 20:38:51.689837 | orchestrator | 2025-07-06 20:38:51.689940 | orchestrator | ## Images @ testbed-node-2 2025-07-06 20:38:51.689955 | orchestrator | 2025-07-06 20:38:51.689968 | orchestrator | + echo 2025-07-06 20:38:51.689980 | orchestrator | + echo '## Images @ testbed-node-2' 2025-07-06 20:38:51.689992 | orchestrator | + echo 2025-07-06 20:38:51.690003 | orchestrator | + osism container testbed-node-2 images 2025-07-06 20:38:53.886291 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-07-06 20:38:53.886407 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 03848d10742f 17 hours ago 1.27GB 2025-07-06 20:38:53.886433 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 b82e7e88101e 19 hours ago 417MB 2025-07-06 20:38:53.886479 | orchestrator | registry.osism.tech/kolla/cron 2024.2 5bd29562eae5 19 hours ago 318MB 2025-07-06 20:38:53.886492 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 0ca5de23172e 19 hours ago 375MB 2025-07-06 20:38:53.886504 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 d709f500305e 19 hours ago 746MB 2025-07-06 20:38:53.886515 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 d02d6940a8b5 19 hours ago 1.01GB 2025-07-06 20:38:53.886526 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 1105a60c1719 19 hours ago 329MB 2025-07-06 20:38:53.886537 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 29d9adf832be 19 hours ago 628MB 2025-07-06 20:38:53.886548 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 e9a720eba69a 19 hours ago 1.59GB 2025-07-06 20:38:53.886559 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 185adae9c4c8 19 hours 
ago 1.55GB 2025-07-06 20:38:53.886571 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 54a2615f6376 19 hours ago 326MB 2025-07-06 20:38:53.886582 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 4e96396aae5f 19 hours ago 318MB 2025-07-06 20:38:53.886594 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 c1d58b373410 19 hours ago 590MB 2025-07-06 20:38:53.886604 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 0bc426a50b1c 19 hours ago 361MB 2025-07-06 20:38:53.886701 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 a467d36db85c 19 hours ago 361MB 2025-07-06 20:38:53.886713 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 00ca7d03d45e 19 hours ago 324MB 2025-07-06 20:38:53.886724 | orchestrator | registry.osism.tech/kolla/redis 2024.2 6636df81d4fd 19 hours ago 324MB 2025-07-06 20:38:53.886735 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 9b012ab08198 19 hours ago 1.21GB 2025-07-06 20:38:53.886746 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 9b38961b2d3c 19 hours ago 358MB 2025-07-06 20:38:53.886757 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 c5c50682919b 19 hours ago 353MB 2025-07-06 20:38:53.886767 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 5d9e06361c58 19 hours ago 351MB 2025-07-06 20:38:53.886778 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 d39d3669b97e 19 hours ago 344MB 2025-07-06 20:38:53.886789 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 be8ff1ec4a38 19 hours ago 410MB 2025-07-06 20:38:53.886800 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 b03520a86bdf 19 hours ago 946MB 2025-07-06 20:38:53.886811 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 b4874fc13311 19 hours ago 947MB 2025-07-06 20:38:53.886822 | orchestrator | 
registry.osism.tech/kolla/ovn-northd 2024.2 0640be510c4b 19 hours ago 947MB 2025-07-06 20:38:53.886833 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 c53d82c33a2f 19 hours ago 946MB 2025-07-06 20:38:53.886843 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 a1ff21d56e36 19 hours ago 1.13GB 2025-07-06 20:38:53.886854 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 e49cf7b60e84 19 hours ago 1.11GB 2025-07-06 20:38:53.886865 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 240d03c6f462 19 hours ago 1.11GB 2025-07-06 20:38:53.886876 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 4646ebc79da1 19 hours ago 1.42GB 2025-07-06 20:38:53.886887 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 2ef568d403d5 19 hours ago 1.29GB 2025-07-06 20:38:53.886907 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 814e7c91fc45 19 hours ago 1.29GB 2025-07-06 20:38:53.886918 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 2ea867708372 19 hours ago 1.29GB 2025-07-06 20:38:53.886929 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 f30e4d711b36 19 hours ago 1.05GB 2025-07-06 20:38:53.886940 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 fd777810b3f3 19 hours ago 1.05GB 2025-07-06 20:38:53.886970 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 ab26fc9e7d9a 19 hours ago 1.06GB 2025-07-06 20:38:53.886982 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 eae75800b099 19 hours ago 1.06GB 2025-07-06 20:38:53.886993 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 9abbfc061f11 19 hours ago 1.05GB 2025-07-06 20:38:53.887004 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 bcfa88ab28eb 19 hours ago 1.05GB 2025-07-06 20:38:53.887015 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 2db0bef4ca19 19 hours ago 1.15GB 2025-07-06 20:38:53.887026 | orchestrator | 
registry.osism.tech/kolla/neutron-server 2024.2 d846eb049269 19 hours ago 1.24GB 2025-07-06 20:38:53.887054 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 59d99b60450c 19 hours ago 1.04GB 2025-07-06 20:38:53.887066 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 ec3d7074f123 19 hours ago 1.41GB 2025-07-06 20:38:53.887077 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 d0d866142daa 19 hours ago 1.41GB 2025-07-06 20:38:53.887088 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 3ac1011589b1 19 hours ago 1.31GB 2025-07-06 20:38:53.887099 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 1d3921482b32 19 hours ago 1.2GB 2025-07-06 20:38:53.887115 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 c6eb1d2fd918 19 hours ago 1.06GB 2025-07-06 20:38:53.887126 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 f8357083a505 19 hours ago 1.06GB 2025-07-06 20:38:53.887137 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 f271acd602de 19 hours ago 1.06GB 2025-07-06 20:38:54.194887 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2025-07-06 20:38:54.204312 | orchestrator | + set -e 2025-07-06 20:38:54.204393 | orchestrator | + source /opt/manager-vars.sh 2025-07-06 20:38:54.205134 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-07-06 20:38:54.205153 | orchestrator | ++ NUMBER_OF_NODES=6 2025-07-06 20:38:54.205209 | orchestrator | ++ export CEPH_VERSION=reef 2025-07-06 20:38:54.205218 | orchestrator | ++ CEPH_VERSION=reef 2025-07-06 20:38:54.205228 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-07-06 20:38:54.205238 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-07-06 20:38:54.205247 | orchestrator | ++ export MANAGER_VERSION=latest 2025-07-06 20:38:54.205256 | orchestrator | ++ MANAGER_VERSION=latest 2025-07-06 20:38:54.205264 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-07-06 20:38:54.205294 | 
orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-07-06 20:38:54.205305 | orchestrator | ++ export ARA=false 2025-07-06 20:38:54.205314 | orchestrator | ++ ARA=false 2025-07-06 20:38:54.205323 | orchestrator | ++ export DEPLOY_MODE=manager 2025-07-06 20:38:54.205332 | orchestrator | ++ DEPLOY_MODE=manager 2025-07-06 20:38:54.205340 | orchestrator | ++ export TEMPEST=false 2025-07-06 20:38:54.205349 | orchestrator | ++ TEMPEST=false 2025-07-06 20:38:54.205457 | orchestrator | ++ export IS_ZUUL=true 2025-07-06 20:38:54.205468 | orchestrator | ++ IS_ZUUL=true 2025-07-06 20:38:54.205477 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.163 2025-07-06 20:38:54.205486 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.163 2025-07-06 20:38:54.205495 | orchestrator | ++ export EXTERNAL_API=false 2025-07-06 20:38:54.205504 | orchestrator | ++ EXTERNAL_API=false 2025-07-06 20:38:54.205513 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-07-06 20:38:54.205521 | orchestrator | ++ IMAGE_USER=ubuntu 2025-07-06 20:38:54.205554 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-07-06 20:38:54.205563 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-07-06 20:38:54.205572 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-07-06 20:38:54.205581 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-07-06 20:38:54.205590 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-07-06 20:38:54.205598 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2025-07-06 20:38:54.212009 | orchestrator | + set -e 2025-07-06 20:38:54.212052 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-07-06 20:38:54.212063 | orchestrator | ++ export INTERACTIVE=false 2025-07-06 20:38:54.212074 | orchestrator | ++ INTERACTIVE=false 2025-07-06 20:38:54.212084 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-07-06 20:38:54.212094 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-07-06 20:38:54.212104 | orchestrator | + source 
/opt/configuration/scripts/manager-version.sh 2025-07-06 20:38:54.212751 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-07-06 20:38:54.218684 | orchestrator | 2025-07-06 20:38:54.218738 | orchestrator | # Ceph status 2025-07-06 20:38:54.218758 | orchestrator | 2025-07-06 20:38:54.218777 | orchestrator | ++ export MANAGER_VERSION=latest 2025-07-06 20:38:54.218794 | orchestrator | ++ MANAGER_VERSION=latest 2025-07-06 20:38:54.218811 | orchestrator | + echo 2025-07-06 20:38:54.218822 | orchestrator | + echo '# Ceph status' 2025-07-06 20:38:54.218831 | orchestrator | + echo 2025-07-06 20:38:54.218841 | orchestrator | + ceph -s 2025-07-06 20:38:54.805124 | orchestrator | cluster: 2025-07-06 20:38:54.805230 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2025-07-06 20:38:54.805247 | orchestrator | health: HEALTH_OK 2025-07-06 20:38:54.805259 | orchestrator | 2025-07-06 20:38:54.805270 | orchestrator | services: 2025-07-06 20:38:54.805282 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 28m) 2025-07-06 20:38:54.805295 | orchestrator | mgr: testbed-node-0(active, since 16m), standbys: testbed-node-1, testbed-node-2 2025-07-06 20:38:54.805307 | orchestrator | mds: 1/1 daemons up, 2 standby 2025-07-06 20:38:54.805355 | orchestrator | osd: 6 osds: 6 up (since 24m), 6 in (since 25m) 2025-07-06 20:38:54.805368 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2025-07-06 20:38:54.805379 | orchestrator | 2025-07-06 20:38:54.805391 | orchestrator | data: 2025-07-06 20:38:54.805402 | orchestrator | volumes: 1/1 healthy 2025-07-06 20:38:54.805414 | orchestrator | pools: 14 pools, 401 pgs 2025-07-06 20:38:54.805425 | orchestrator | objects: 524 objects, 2.2 GiB 2025-07-06 20:38:54.805436 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2025-07-06 20:38:54.805447 | orchestrator | pgs: 401 active+clean 2025-07-06 20:38:54.805458 | 
orchestrator | 2025-07-06 20:38:54.846212 | orchestrator | 2025-07-06 20:38:54.846284 | orchestrator | # Ceph versions 2025-07-06 20:38:54.846291 | orchestrator | 2025-07-06 20:38:54.846296 | orchestrator | + echo 2025-07-06 20:38:54.846301 | orchestrator | + echo '# Ceph versions' 2025-07-06 20:38:54.846306 | orchestrator | + echo 2025-07-06 20:38:54.846311 | orchestrator | + ceph versions 2025-07-06 20:38:55.405139 | orchestrator | { 2025-07-06 20:38:55.405210 | orchestrator | "mon": { 2025-07-06 20:38:55.405216 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-07-06 20:38:55.405221 | orchestrator | }, 2025-07-06 20:38:55.405226 | orchestrator | "mgr": { 2025-07-06 20:38:55.405232 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-07-06 20:38:55.405239 | orchestrator | }, 2025-07-06 20:38:55.405245 | orchestrator | "osd": { 2025-07-06 20:38:55.405251 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2025-07-06 20:38:55.405257 | orchestrator | }, 2025-07-06 20:38:55.405263 | orchestrator | "mds": { 2025-07-06 20:38:55.405269 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-07-06 20:38:55.405275 | orchestrator | }, 2025-07-06 20:38:55.405282 | orchestrator | "rgw": { 2025-07-06 20:38:55.405288 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-07-06 20:38:55.405292 | orchestrator | }, 2025-07-06 20:38:55.405296 | orchestrator | "overall": { 2025-07-06 20:38:55.405300 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2025-07-06 20:38:55.405304 | orchestrator | } 2025-07-06 20:38:55.405308 | orchestrator | } 2025-07-06 20:38:55.457605 | orchestrator | 2025-07-06 20:38:55.457732 | orchestrator | # Ceph OSD tree 2025-07-06 20:38:55.457746 | orchestrator | 
2025-07-06 20:38:55.457759 | orchestrator | + echo 2025-07-06 20:38:55.457800 | orchestrator | + echo '# Ceph OSD tree' 2025-07-06 20:38:55.457812 | orchestrator | + echo 2025-07-06 20:38:55.457823 | orchestrator | + ceph osd df tree 2025-07-06 20:38:55.959716 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2025-07-06 20:38:55.959838 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 430 MiB 113 GiB 5.92 1.00 - root default 2025-07-06 20:38:55.959853 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-3 2025-07-06 20:38:55.959865 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 70 MiB 19 GiB 5.56 0.94 189 up osd.0 2025-07-06 20:38:55.959876 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 74 MiB 19 GiB 6.28 1.06 201 up osd.3 2025-07-06 20:38:55.959887 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4 2025-07-06 20:38:55.959898 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 70 MiB 19 GiB 6.58 1.11 198 up osd.2 2025-07-06 20:38:55.959909 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1003 MiB 1 KiB 74 MiB 19 GiB 5.26 0.89 190 up osd.4 2025-07-06 20:38:55.959920 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5 2025-07-06 20:38:55.959932 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.1 GiB 1 KiB 70 MiB 19 GiB 5.64 0.95 195 up osd.1 2025-07-06 20:38:55.959943 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.2 GiB 1 KiB 74 MiB 19 GiB 6.20 1.05 197 up osd.5 2025-07-06 20:38:55.959954 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 430 MiB 113 GiB 5.92 2025-07-06 20:38:55.959965 | orchestrator | MIN/MAX VAR: 0.89/1.11 STDDEV: 0.46 2025-07-06 20:38:56.011468 | orchestrator | 2025-07-06 20:38:56.011555 | orchestrator | # Ceph monitor status 2025-07-06 
20:38:56.011569 | orchestrator | 2025-07-06 20:38:56.011581 | orchestrator | + echo 2025-07-06 20:38:56.011593 | orchestrator | + echo '# Ceph monitor status' 2025-07-06 20:38:56.011604 | orchestrator | + echo 2025-07-06 20:38:56.011641 | orchestrator | + ceph mon stat 2025-07-06 20:38:56.588592 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2025-07-06 20:38:56.635086 | orchestrator | 2025-07-06 20:38:56.635184 | orchestrator | # Ceph quorum status 2025-07-06 20:38:56.635201 | orchestrator | 2025-07-06 20:38:56.635214 | orchestrator | + echo 2025-07-06 20:38:56.635226 | orchestrator | + echo '# Ceph quorum status' 2025-07-06 20:38:56.635240 | orchestrator | + echo 2025-07-06 20:38:56.635323 | orchestrator | + ceph quorum_status 2025-07-06 20:38:56.635584 | orchestrator | + jq 2025-07-06 20:38:57.286996 | orchestrator | { 2025-07-06 20:38:57.287103 | orchestrator | "election_epoch": 8, 2025-07-06 20:38:57.287120 | orchestrator | "quorum": [ 2025-07-06 20:38:57.287132 | orchestrator | 0, 2025-07-06 20:38:57.287144 | orchestrator | 1, 2025-07-06 20:38:57.287156 | orchestrator | 2 2025-07-06 20:38:57.287167 | orchestrator | ], 2025-07-06 20:38:57.287179 | orchestrator | "quorum_names": [ 2025-07-06 20:38:57.287190 | orchestrator | "testbed-node-0", 2025-07-06 20:38:57.287202 | orchestrator | "testbed-node-1", 2025-07-06 20:38:57.287213 | orchestrator | "testbed-node-2" 2025-07-06 20:38:57.287277 | orchestrator | ], 2025-07-06 20:38:57.287292 | orchestrator | "quorum_leader_name": "testbed-node-0", 2025-07-06 20:38:57.287305 | orchestrator | "quorum_age": 1702, 2025-07-06 20:38:57.287316 | orchestrator | "features": { 2025-07-06 20:38:57.287327 | 
orchestrator | "quorum_con": "4540138322906710015", 2025-07-06 20:38:57.287338 | orchestrator | "quorum_mon": [ 2025-07-06 20:38:57.287349 | orchestrator | "kraken", 2025-07-06 20:38:57.287360 | orchestrator | "luminous", 2025-07-06 20:38:57.287371 | orchestrator | "mimic", 2025-07-06 20:38:57.287410 | orchestrator | "osdmap-prune", 2025-07-06 20:38:57.287422 | orchestrator | "nautilus", 2025-07-06 20:38:57.287433 | orchestrator | "octopus", 2025-07-06 20:38:57.287444 | orchestrator | "pacific", 2025-07-06 20:38:57.287454 | orchestrator | "elector-pinging", 2025-07-06 20:38:57.287465 | orchestrator | "quincy", 2025-07-06 20:38:57.287476 | orchestrator | "reef" 2025-07-06 20:38:57.287487 | orchestrator | ] 2025-07-06 20:38:57.287498 | orchestrator | }, 2025-07-06 20:38:57.287509 | orchestrator | "monmap": { 2025-07-06 20:38:57.287520 | orchestrator | "epoch": 1, 2025-07-06 20:38:57.287532 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2025-07-06 20:38:57.287546 | orchestrator | "modified": "2025-07-06T20:10:17.034746Z", 2025-07-06 20:38:57.287558 | orchestrator | "created": "2025-07-06T20:10:17.034746Z", 2025-07-06 20:38:57.287570 | orchestrator | "min_mon_release": 18, 2025-07-06 20:38:57.287583 | orchestrator | "min_mon_release_name": "reef", 2025-07-06 20:38:57.287595 | orchestrator | "election_strategy": 1, 2025-07-06 20:38:57.287608 | orchestrator | "disallowed_leaders: ": "", 2025-07-06 20:38:57.287646 | orchestrator | "stretch_mode": false, 2025-07-06 20:38:57.287659 | orchestrator | "tiebreaker_mon": "", 2025-07-06 20:38:57.287672 | orchestrator | "removed_ranks: ": "", 2025-07-06 20:38:57.287684 | orchestrator | "features": { 2025-07-06 20:38:57.287696 | orchestrator | "persistent": [ 2025-07-06 20:38:57.287708 | orchestrator | "kraken", 2025-07-06 20:38:57.287721 | orchestrator | "luminous", 2025-07-06 20:38:57.287733 | orchestrator | "mimic", 2025-07-06 20:38:57.287746 | orchestrator | "osdmap-prune", 2025-07-06 20:38:57.287758 | 
orchestrator | "nautilus", 2025-07-06 20:38:57.287770 | orchestrator | "octopus", 2025-07-06 20:38:57.287782 | orchestrator | "pacific", 2025-07-06 20:38:57.287795 | orchestrator | "elector-pinging", 2025-07-06 20:38:57.287807 | orchestrator | "quincy", 2025-07-06 20:38:57.287820 | orchestrator | "reef" 2025-07-06 20:38:57.287832 | orchestrator | ], 2025-07-06 20:38:57.287844 | orchestrator | "optional": [] 2025-07-06 20:38:57.287857 | orchestrator | }, 2025-07-06 20:38:57.287869 | orchestrator | "mons": [ 2025-07-06 20:38:57.287882 | orchestrator | { 2025-07-06 20:38:57.287894 | orchestrator | "rank": 0, 2025-07-06 20:38:57.287907 | orchestrator | "name": "testbed-node-0", 2025-07-06 20:38:57.287919 | orchestrator | "public_addrs": { 2025-07-06 20:38:57.287932 | orchestrator | "addrvec": [ 2025-07-06 20:38:57.287944 | orchestrator | { 2025-07-06 20:38:57.287955 | orchestrator | "type": "v2", 2025-07-06 20:38:57.287966 | orchestrator | "addr": "192.168.16.10:3300", 2025-07-06 20:38:57.287977 | orchestrator | "nonce": 0 2025-07-06 20:38:57.287988 | orchestrator | }, 2025-07-06 20:38:57.287999 | orchestrator | { 2025-07-06 20:38:57.288010 | orchestrator | "type": "v1", 2025-07-06 20:38:57.288020 | orchestrator | "addr": "192.168.16.10:6789", 2025-07-06 20:38:57.288048 | orchestrator | "nonce": 0 2025-07-06 20:38:57.288060 | orchestrator | } 2025-07-06 20:38:57.288071 | orchestrator | ] 2025-07-06 20:38:57.288082 | orchestrator | }, 2025-07-06 20:38:57.288092 | orchestrator | "addr": "192.168.16.10:6789/0", 2025-07-06 20:38:57.288104 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2025-07-06 20:38:57.288115 | orchestrator | "priority": 0, 2025-07-06 20:38:57.288125 | orchestrator | "weight": 0, 2025-07-06 20:38:57.288136 | orchestrator | "crush_location": "{}" 2025-07-06 20:38:57.288147 | orchestrator | }, 2025-07-06 20:38:57.288158 | orchestrator | { 2025-07-06 20:38:57.288170 | orchestrator | "rank": 1, 2025-07-06 20:38:57.288181 | orchestrator | "name": 
"testbed-node-1", 2025-07-06 20:38:57.288191 | orchestrator | "public_addrs": { 2025-07-06 20:38:57.288202 | orchestrator | "addrvec": [ 2025-07-06 20:38:57.288213 | orchestrator | { 2025-07-06 20:38:57.288224 | orchestrator | "type": "v2", 2025-07-06 20:38:57.288235 | orchestrator | "addr": "192.168.16.11:3300", 2025-07-06 20:38:57.288246 | orchestrator | "nonce": 0 2025-07-06 20:38:57.288256 | orchestrator | }, 2025-07-06 20:38:57.288267 | orchestrator | { 2025-07-06 20:38:57.288278 | orchestrator | "type": "v1", 2025-07-06 20:38:57.288289 | orchestrator | "addr": "192.168.16.11:6789", 2025-07-06 20:38:57.288300 | orchestrator | "nonce": 0 2025-07-06 20:38:57.288311 | orchestrator | } 2025-07-06 20:38:57.288322 | orchestrator | ] 2025-07-06 20:38:57.288333 | orchestrator | }, 2025-07-06 20:38:57.288343 | orchestrator | "addr": "192.168.16.11:6789/0", 2025-07-06 20:38:57.288355 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2025-07-06 20:38:57.288374 | orchestrator | "priority": 0, 2025-07-06 20:38:57.288386 | orchestrator | "weight": 0, 2025-07-06 20:38:57.288396 | orchestrator | "crush_location": "{}" 2025-07-06 20:38:57.288407 | orchestrator | }, 2025-07-06 20:38:57.288418 | orchestrator | { 2025-07-06 20:38:57.288429 | orchestrator | "rank": 2, 2025-07-06 20:38:57.288440 | orchestrator | "name": "testbed-node-2", 2025-07-06 20:38:57.288451 | orchestrator | "public_addrs": { 2025-07-06 20:38:57.288462 | orchestrator | "addrvec": [ 2025-07-06 20:38:57.288473 | orchestrator | { 2025-07-06 20:38:57.288484 | orchestrator | "type": "v2", 2025-07-06 20:38:57.288495 | orchestrator | "addr": "192.168.16.12:3300", 2025-07-06 20:38:57.288506 | orchestrator | "nonce": 0 2025-07-06 20:38:57.288517 | orchestrator | }, 2025-07-06 20:38:57.288528 | orchestrator | { 2025-07-06 20:38:57.288538 | orchestrator | "type": "v1", 2025-07-06 20:38:57.288549 | orchestrator | "addr": "192.168.16.12:6789", 2025-07-06 20:38:57.288560 | orchestrator | "nonce": 0 2025-07-06 
20:38:57.288571 | orchestrator | } 2025-07-06 20:38:57.288582 | orchestrator | ] 2025-07-06 20:38:57.288593 | orchestrator | }, 2025-07-06 20:38:57.288603 | orchestrator | "addr": "192.168.16.12:6789/0", 2025-07-06 20:38:57.288631 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2025-07-06 20:38:57.288643 | orchestrator | "priority": 0, 2025-07-06 20:38:57.288654 | orchestrator | "weight": 0, 2025-07-06 20:38:57.288665 | orchestrator | "crush_location": "{}" 2025-07-06 20:38:57.288677 | orchestrator | } 2025-07-06 20:38:57.288688 | orchestrator | ] 2025-07-06 20:38:57.288698 | orchestrator | } 2025-07-06 20:38:57.288709 | orchestrator | } 2025-07-06 20:38:57.288843 | orchestrator | 2025-07-06 20:38:57.288860 | orchestrator | # Ceph free space status 2025-07-06 20:38:57.288872 | orchestrator | 2025-07-06 20:38:57.288882 | orchestrator | + echo 2025-07-06 20:38:57.288893 | orchestrator | + echo '# Ceph free space status' 2025-07-06 20:38:57.288904 | orchestrator | + echo 2025-07-06 20:38:57.288915 | orchestrator | + ceph df 2025-07-06 20:38:57.884053 | orchestrator | --- RAW STORAGE --- 2025-07-06 20:38:57.884177 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2025-07-06 20:38:57.884198 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2025-07-06 20:38:57.884210 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2025-07-06 20:38:57.884221 | orchestrator | 2025-07-06 20:38:57.884233 | orchestrator | --- POOLS --- 2025-07-06 20:38:57.884245 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2025-07-06 20:38:57.884257 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB 2025-07-06 20:38:57.884268 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2025-07-06 20:38:57.884279 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2025-07-06 20:38:57.884290 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2025-07-06 20:38:57.884301 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 
2025-07-06 20:38:57.884312 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2025-07-06 20:38:57.884323 | orchestrator | default.rgw.log 7 32 3.6 KiB 177 408 KiB 0 35 GiB 2025-07-06 20:38:57.884334 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2025-07-06 20:38:57.884344 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 53 GiB 2025-07-06 20:38:57.884355 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2025-07-06 20:38:57.884366 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2025-07-06 20:38:57.884377 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.91 35 GiB 2025-07-06 20:38:57.884388 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2025-07-06 20:38:57.884399 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2025-07-06 20:38:57.932931 | orchestrator | ++ semver latest 5.0.0 2025-07-06 20:38:57.980902 | orchestrator | + [[ -1 -eq -1 ]] 2025-07-06 20:38:57.981000 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-07-06 20:38:57.981016 | orchestrator | + [[ ! -e /etc/redhat-release ]] 2025-07-06 20:38:57.981028 | orchestrator | + osism apply facts 2025-07-06 20:38:59.888117 | orchestrator | 2025-07-06 20:38:59 | INFO  | Task bcca156a-e0b9-478f-b9c2-d8e66e24e19e (facts) was prepared for execution. 2025-07-06 20:38:59.888221 | orchestrator | 2025-07-06 20:38:59 | INFO  | It takes a moment until task bcca156a-e0b9-478f-b9c2-d8e66e24e19e (facts) has been started and output is visible here. 
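The `ceph quorum_status | jq` output above shows all three monitors (testbed-node-0/1/2) in quorum. A minimal sketch of how such output could be checked programmatically — this is an illustration, not part of the job; it assumes the JSON layout shown in the log (in practice you would feed it the output of `ceph quorum_status -f json`):

```python
import json

def quorum_complete(status: dict) -> bool:
    """Return True when every monitor listed in the monmap is in quorum."""
    monmap_names = {m["name"] for m in status["monmap"]["mons"]}
    quorum_names = set(status["quorum_names"])
    return monmap_names == quorum_names

# Trimmed-down sample mirroring the quorum_status output in the log above.
sample = json.loads("""
{
  "quorum_names": ["testbed-node-0", "testbed-node-1", "testbed-node-2"],
  "quorum_leader_name": "testbed-node-0",
  "monmap": {"mons": [{"name": "testbed-node-0"},
                      {"name": "testbed-node-1"},
                      {"name": "testbed-node-2"}]}
}
""")
print(quorum_complete(sample))  # -> True
```

The same set comparison is effectively what the `osism validate ceph-mons` quorum test later in this log performs against the monmap.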
2025-07-06 20:39:12.946620 | orchestrator | 2025-07-06 20:39:12.946808 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-07-06 20:39:12.946828 | orchestrator | 2025-07-06 20:39:12.946841 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-07-06 20:39:12.946853 | orchestrator | Sunday 06 July 2025 20:39:03 +0000 (0:00:00.275) 0:00:00.275 *********** 2025-07-06 20:39:12.946865 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:39:12.946877 | orchestrator | ok: [testbed-manager] 2025-07-06 20:39:12.946889 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:39:12.946900 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:39:12.946911 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:39:12.946922 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:39:12.946932 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:39:12.946943 | orchestrator | 2025-07-06 20:39:12.946955 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-07-06 20:39:12.946966 | orchestrator | Sunday 06 July 2025 20:39:05 +0000 (0:00:01.547) 0:00:01.822 *********** 2025-07-06 20:39:12.946977 | orchestrator | skipping: [testbed-manager] 2025-07-06 20:39:12.946989 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:39:12.947001 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:39:12.947012 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:39:12.947023 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:39:12.947034 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:39:12.947045 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:39:12.947056 | orchestrator | 2025-07-06 20:39:12.947067 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-07-06 20:39:12.947078 | orchestrator | 2025-07-06 20:39:12.947089 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2025-07-06 20:39:12.947100 | orchestrator | Sunday 06 July 2025 20:39:06 +0000 (0:00:01.264) 0:00:03.087 *********** 2025-07-06 20:39:12.947111 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:39:12.947122 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:39:12.947133 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:39:12.947144 | orchestrator | ok: [testbed-manager] 2025-07-06 20:39:12.947156 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:39:12.947168 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:39:12.947181 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:39:12.947194 | orchestrator | 2025-07-06 20:39:12.947208 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-07-06 20:39:12.947221 | orchestrator | 2025-07-06 20:39:12.947234 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-07-06 20:39:12.947247 | orchestrator | Sunday 06 July 2025 20:39:12 +0000 (0:00:05.260) 0:00:08.348 *********** 2025-07-06 20:39:12.947259 | orchestrator | skipping: [testbed-manager] 2025-07-06 20:39:12.947272 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:39:12.947285 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:39:12.947298 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:39:12.947311 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:39:12.947323 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:39:12.947336 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:39:12.947349 | orchestrator | 2025-07-06 20:39:12.947361 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:39:12.947429 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 20:39:12.947444 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2025-07-06 20:39:12.947484 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 20:39:12.947498 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 20:39:12.947511 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 20:39:12.947524 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 20:39:12.947535 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 20:39:12.947546 | orchestrator | 2025-07-06 20:39:12.947557 | orchestrator | 2025-07-06 20:39:12.947569 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:39:12.947580 | orchestrator | Sunday 06 July 2025 20:39:12 +0000 (0:00:00.548) 0:00:08.896 *********** 2025-07-06 20:39:12.947591 | orchestrator | =============================================================================== 2025-07-06 20:39:12.947602 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.26s 2025-07-06 20:39:12.947613 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.55s 2025-07-06 20:39:12.947660 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.26s 2025-07-06 20:39:12.947675 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.55s 2025-07-06 20:39:13.217791 | orchestrator | + osism validate ceph-mons 2025-07-06 20:39:44.781998 | orchestrator | 2025-07-06 20:39:44.782166 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2025-07-06 20:39:44.782184 | orchestrator | 2025-07-06 20:39:44.782196 | orchestrator | TASK [Get timestamp for report file] 
******************************************* 2025-07-06 20:39:44.782208 | orchestrator | Sunday 06 July 2025 20:39:29 +0000 (0:00:00.444) 0:00:00.444 *********** 2025-07-06 20:39:44.782220 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-06 20:39:44.782231 | orchestrator | 2025-07-06 20:39:44.782242 | orchestrator | TASK [Create report output directory] ****************************************** 2025-07-06 20:39:44.782254 | orchestrator | Sunday 06 July 2025 20:39:30 +0000 (0:00:00.617) 0:00:01.062 *********** 2025-07-06 20:39:44.782265 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-06 20:39:44.782276 | orchestrator | 2025-07-06 20:39:44.782287 | orchestrator | TASK [Define report vars] ****************************************************** 2025-07-06 20:39:44.782298 | orchestrator | Sunday 06 July 2025 20:39:30 +0000 (0:00:00.846) 0:00:01.909 *********** 2025-07-06 20:39:44.782310 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:39:44.782322 | orchestrator | 2025-07-06 20:39:44.782333 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-07-06 20:39:44.782385 | orchestrator | Sunday 06 July 2025 20:39:31 +0000 (0:00:00.235) 0:00:02.145 *********** 2025-07-06 20:39:44.782398 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:39:44.782410 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:39:44.782421 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:39:44.782433 | orchestrator | 2025-07-06 20:39:44.782445 | orchestrator | TASK [Get container info] ****************************************************** 2025-07-06 20:39:44.782456 | orchestrator | Sunday 06 July 2025 20:39:31 +0000 (0:00:00.286) 0:00:02.432 *********** 2025-07-06 20:39:44.782467 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:39:44.782478 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:39:44.782489 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:39:44.782500 | 
orchestrator | 2025-07-06 20:39:44.782530 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-07-06 20:39:44.782544 | orchestrator | Sunday 06 July 2025 20:39:32 +0000 (0:00:01.012) 0:00:03.444 *********** 2025-07-06 20:39:44.782557 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:39:44.782590 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:39:44.782603 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:39:44.782616 | orchestrator | 2025-07-06 20:39:44.782629 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-07-06 20:39:44.782642 | orchestrator | Sunday 06 July 2025 20:39:32 +0000 (0:00:00.272) 0:00:03.716 *********** 2025-07-06 20:39:44.782704 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:39:44.782726 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:39:44.782746 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:39:44.782765 | orchestrator | 2025-07-06 20:39:44.782779 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-07-06 20:39:44.782792 | orchestrator | Sunday 06 July 2025 20:39:33 +0000 (0:00:00.462) 0:00:04.179 *********** 2025-07-06 20:39:44.782804 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:39:44.782817 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:39:44.782829 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:39:44.782842 | orchestrator | 2025-07-06 20:39:44.782854 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2025-07-06 20:39:44.782866 | orchestrator | Sunday 06 July 2025 20:39:33 +0000 (0:00:00.313) 0:00:04.493 *********** 2025-07-06 20:39:44.782879 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:39:44.782892 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:39:44.782903 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:39:44.782913 | orchestrator | 2025-07-06 
20:39:44.782925 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2025-07-06 20:39:44.782936 | orchestrator | Sunday 06 July 2025 20:39:33 +0000 (0:00:00.295) 0:00:04.788 *********** 2025-07-06 20:39:44.782947 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:39:44.782957 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:39:44.782975 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:39:44.782992 | orchestrator | 2025-07-06 20:39:44.783006 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-07-06 20:39:44.783025 | orchestrator | Sunday 06 July 2025 20:39:34 +0000 (0:00:00.295) 0:00:05.083 *********** 2025-07-06 20:39:44.783037 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:39:44.783048 | orchestrator | 2025-07-06 20:39:44.783065 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-07-06 20:39:44.783077 | orchestrator | Sunday 06 July 2025 20:39:34 +0000 (0:00:00.242) 0:00:05.325 *********** 2025-07-06 20:39:44.783087 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:39:44.783098 | orchestrator | 2025-07-06 20:39:44.783109 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-07-06 20:39:44.783120 | orchestrator | Sunday 06 July 2025 20:39:35 +0000 (0:00:00.655) 0:00:05.981 *********** 2025-07-06 20:39:44.783132 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:39:44.783142 | orchestrator | 2025-07-06 20:39:44.783153 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-06 20:39:44.783164 | orchestrator | Sunday 06 July 2025 20:39:35 +0000 (0:00:00.244) 0:00:06.225 *********** 2025-07-06 20:39:44.783175 | orchestrator | 2025-07-06 20:39:44.783187 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-06 20:39:44.783198 | orchestrator | 
Sunday 06 July 2025 20:39:35 +0000 (0:00:00.067) 0:00:06.293 *********** 2025-07-06 20:39:44.783209 | orchestrator | 2025-07-06 20:39:44.783220 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-06 20:39:44.783231 | orchestrator | Sunday 06 July 2025 20:39:35 +0000 (0:00:00.068) 0:00:06.361 *********** 2025-07-06 20:39:44.783242 | orchestrator | 2025-07-06 20:39:44.783253 | orchestrator | TASK [Print report file information] ******************************************* 2025-07-06 20:39:44.783264 | orchestrator | Sunday 06 July 2025 20:39:35 +0000 (0:00:00.075) 0:00:06.436 *********** 2025-07-06 20:39:44.783275 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:39:44.783286 | orchestrator | 2025-07-06 20:39:44.783297 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-07-06 20:39:44.783308 | orchestrator | Sunday 06 July 2025 20:39:35 +0000 (0:00:00.241) 0:00:06.677 *********** 2025-07-06 20:39:44.783331 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:39:44.783343 | orchestrator | 2025-07-06 20:39:44.783372 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2025-07-06 20:39:44.783383 | orchestrator | Sunday 06 July 2025 20:39:35 +0000 (0:00:00.232) 0:00:06.910 *********** 2025-07-06 20:39:44.783394 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:39:44.783405 | orchestrator | 2025-07-06 20:39:44.783417 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2025-07-06 20:39:44.783428 | orchestrator | Sunday 06 July 2025 20:39:36 +0000 (0:00:00.113) 0:00:07.024 *********** 2025-07-06 20:39:44.783438 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:39:44.783449 | orchestrator | 2025-07-06 20:39:44.783460 | orchestrator | TASK [Set quorum test data] **************************************************** 2025-07-06 20:39:44.783471 | orchestrator | Sunday 
06 July 2025 20:39:37 +0000 (0:00:01.588) 0:00:08.612 *********** 2025-07-06 20:39:44.783482 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:39:44.783493 | orchestrator | 2025-07-06 20:39:44.783504 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2025-07-06 20:39:44.783515 | orchestrator | Sunday 06 July 2025 20:39:37 +0000 (0:00:00.301) 0:00:08.914 *********** 2025-07-06 20:39:44.783526 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:39:44.783537 | orchestrator | 2025-07-06 20:39:44.783548 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2025-07-06 20:39:44.783559 | orchestrator | Sunday 06 July 2025 20:39:38 +0000 (0:00:00.149) 0:00:09.064 *********** 2025-07-06 20:39:44.783570 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:39:44.783581 | orchestrator | 2025-07-06 20:39:44.783592 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2025-07-06 20:39:44.783603 | orchestrator | Sunday 06 July 2025 20:39:38 +0000 (0:00:00.454) 0:00:09.518 *********** 2025-07-06 20:39:44.783614 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:39:44.783625 | orchestrator | 2025-07-06 20:39:44.783636 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2025-07-06 20:39:44.783647 | orchestrator | Sunday 06 July 2025 20:39:38 +0000 (0:00:00.319) 0:00:09.837 *********** 2025-07-06 20:39:44.783697 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:39:44.783708 | orchestrator | 2025-07-06 20:39:44.783720 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2025-07-06 20:39:44.783731 | orchestrator | Sunday 06 July 2025 20:39:38 +0000 (0:00:00.120) 0:00:09.958 *********** 2025-07-06 20:39:44.783742 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:39:44.783753 | orchestrator | 2025-07-06 20:39:44.783764 | orchestrator | TASK [Prepare 
status test vars] ************************************************ 2025-07-06 20:39:44.783775 | orchestrator | Sunday 06 July 2025 20:39:39 +0000 (0:00:00.136) 0:00:10.094 *********** 2025-07-06 20:39:44.783786 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:39:44.783797 | orchestrator | 2025-07-06 20:39:44.783808 | orchestrator | TASK [Gather status data] ****************************************************** 2025-07-06 20:39:44.783819 | orchestrator | Sunday 06 July 2025 20:39:39 +0000 (0:00:00.120) 0:00:10.215 *********** 2025-07-06 20:39:44.783830 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:39:44.783841 | orchestrator | 2025-07-06 20:39:44.783852 | orchestrator | TASK [Set health test data] **************************************************** 2025-07-06 20:39:44.783863 | orchestrator | Sunday 06 July 2025 20:39:40 +0000 (0:00:01.281) 0:00:11.496 *********** 2025-07-06 20:39:44.783874 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:39:44.783885 | orchestrator | 2025-07-06 20:39:44.783897 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2025-07-06 20:39:44.783908 | orchestrator | Sunday 06 July 2025 20:39:40 +0000 (0:00:00.305) 0:00:11.801 *********** 2025-07-06 20:39:44.783919 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:39:44.783930 | orchestrator | 2025-07-06 20:39:44.783941 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2025-07-06 20:39:44.783952 | orchestrator | Sunday 06 July 2025 20:39:40 +0000 (0:00:00.124) 0:00:11.926 *********** 2025-07-06 20:39:44.783970 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:39:44.783981 | orchestrator | 2025-07-06 20:39:44.783993 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2025-07-06 20:39:44.784004 | orchestrator | Sunday 06 July 2025 20:39:41 +0000 (0:00:00.152) 0:00:12.079 *********** 2025-07-06 20:39:44.784015 | orchestrator | 
skipping: [testbed-node-0] 2025-07-06 20:39:44.784026 | orchestrator | 2025-07-06 20:39:44.784037 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2025-07-06 20:39:44.784048 | orchestrator | Sunday 06 July 2025 20:39:41 +0000 (0:00:00.127) 0:00:12.207 *********** 2025-07-06 20:39:44.784059 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:39:44.784070 | orchestrator | 2025-07-06 20:39:44.784082 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-07-06 20:39:44.784093 | orchestrator | Sunday 06 July 2025 20:39:41 +0000 (0:00:00.137) 0:00:12.344 *********** 2025-07-06 20:39:44.784104 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-06 20:39:44.784115 | orchestrator | 2025-07-06 20:39:44.784126 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-07-06 20:39:44.784137 | orchestrator | Sunday 06 July 2025 20:39:41 +0000 (0:00:00.480) 0:00:12.825 *********** 2025-07-06 20:39:44.784148 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:39:44.784159 | orchestrator | 2025-07-06 20:39:44.784171 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-07-06 20:39:44.784182 | orchestrator | Sunday 06 July 2025 20:39:42 +0000 (0:00:00.611) 0:00:13.437 *********** 2025-07-06 20:39:44.784193 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-06 20:39:44.784204 | orchestrator | 2025-07-06 20:39:44.784215 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-07-06 20:39:44.784226 | orchestrator | Sunday 06 July 2025 20:39:44 +0000 (0:00:01.551) 0:00:14.988 *********** 2025-07-06 20:39:44.784237 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-06 20:39:44.784248 | orchestrator | 2025-07-06 20:39:44.784259 | orchestrator | TASK [Aggregate 
test results step three] *************************************** 2025-07-06 20:39:44.784270 | orchestrator | Sunday 06 July 2025 20:39:44 +0000 (0:00:00.258) 0:00:15.246 *********** 2025-07-06 20:39:44.784281 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-06 20:39:44.784292 | orchestrator | 2025-07-06 20:39:44.784310 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-06 20:39:46.775245 | orchestrator | Sunday 06 July 2025 20:39:44 +0000 (0:00:00.259) 0:00:15.506 *********** 2025-07-06 20:39:46.775360 | orchestrator | 2025-07-06 20:39:46.775385 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-06 20:39:46.775406 | orchestrator | Sunday 06 July 2025 20:39:44 +0000 (0:00:00.069) 0:00:15.576 *********** 2025-07-06 20:39:46.775428 | orchestrator | 2025-07-06 20:39:46.775441 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-06 20:39:46.775452 | orchestrator | Sunday 06 July 2025 20:39:44 +0000 (0:00:00.083) 0:00:15.659 *********** 2025-07-06 20:39:46.775464 | orchestrator | 2025-07-06 20:39:46.775475 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-07-06 20:39:46.775486 | orchestrator | Sunday 06 July 2025 20:39:44 +0000 (0:00:00.070) 0:00:15.730 *********** 2025-07-06 20:39:46.775498 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-06 20:39:46.775509 | orchestrator | 2025-07-06 20:39:46.775521 | orchestrator | TASK [Print report file information] ******************************************* 2025-07-06 20:39:46.775532 | orchestrator | Sunday 06 July 2025 20:39:46 +0000 (0:00:01.235) 0:00:16.965 *********** 2025-07-06 20:39:46.775543 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-07-06 20:39:46.775554 | orchestrator |  "msg": [ 2025-07-06 
20:39:46.775570 | orchestrator |  "Validator run completed.", 2025-07-06 20:39:46.775590 | orchestrator |  "You can find the report file here:", 2025-07-06 20:39:46.775642 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2025-07-06T20:39:29+00:00-report.json", 2025-07-06 20:39:46.775722 | orchestrator |  "on the following host:", 2025-07-06 20:39:46.775735 | orchestrator |  "testbed-manager" 2025-07-06 20:39:46.775746 | orchestrator |  ] 2025-07-06 20:39:46.775758 | orchestrator | } 2025-07-06 20:39:46.775771 | orchestrator | 2025-07-06 20:39:46.775784 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:39:46.775798 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-07-06 20:39:46.775816 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 20:39:46.775830 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 20:39:46.775843 | orchestrator | 2025-07-06 20:39:46.775855 | orchestrator | 2025-07-06 20:39:46.775868 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:39:46.775881 | orchestrator | Sunday 06 July 2025 20:39:46 +0000 (0:00:00.393) 0:00:17.358 *********** 2025-07-06 20:39:46.775893 | orchestrator | =============================================================================== 2025-07-06 20:39:46.775906 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.59s 2025-07-06 20:39:46.775919 | orchestrator | Aggregate test results step one ----------------------------------------- 1.55s 2025-07-06 20:39:46.775931 | orchestrator | Gather status data ------------------------------------------------------ 1.28s 2025-07-06 20:39:46.775961 | orchestrator | Write report file 
------------------------------------------------------- 1.24s 2025-07-06 20:39:46.775975 | orchestrator | Get container info ------------------------------------------------------ 1.01s 2025-07-06 20:39:46.775988 | orchestrator | Create report output directory ------------------------------------------ 0.85s 2025-07-06 20:39:46.776000 | orchestrator | Aggregate test results step two ----------------------------------------- 0.66s 2025-07-06 20:39:46.776012 | orchestrator | Get timestamp for report file ------------------------------------------- 0.62s 2025-07-06 20:39:46.776025 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.61s 2025-07-06 20:39:46.776043 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.48s 2025-07-06 20:39:46.776056 | orchestrator | Set test result to passed if container is existing ---------------------- 0.46s 2025-07-06 20:39:46.776068 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.45s 2025-07-06 20:39:46.776079 | orchestrator | Print report file information ------------------------------------------- 0.39s 2025-07-06 20:39:46.776090 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.32s 2025-07-06 20:39:46.776101 | orchestrator | Prepare test data ------------------------------------------------------- 0.31s 2025-07-06 20:39:46.776111 | orchestrator | Set health test data ---------------------------------------------------- 0.31s 2025-07-06 20:39:46.776122 | orchestrator | Set quorum test data ---------------------------------------------------- 0.30s 2025-07-06 20:39:46.776133 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.30s 2025-07-06 20:39:46.776144 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.30s 2025-07-06 20:39:46.776154 | orchestrator | Prepare test data for container 
existance test -------------------------- 0.29s 2025-07-06 20:39:47.025202 | orchestrator | + osism validate ceph-mgrs 2025-07-06 20:40:17.314165 | orchestrator | 2025-07-06 20:40:17.314301 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2025-07-06 20:40:17.314320 | orchestrator | 2025-07-06 20:40:17.314332 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-07-06 20:40:17.314344 | orchestrator | Sunday 06 July 2025 20:40:03 +0000 (0:00:00.451) 0:00:00.451 *********** 2025-07-06 20:40:17.314356 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-06 20:40:17.314393 | orchestrator | 2025-07-06 20:40:17.314405 | orchestrator | TASK [Create report output directory] ****************************************** 2025-07-06 20:40:17.314416 | orchestrator | Sunday 06 July 2025 20:40:03 +0000 (0:00:00.683) 0:00:01.134 *********** 2025-07-06 20:40:17.314427 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-06 20:40:17.314438 | orchestrator | 2025-07-06 20:40:17.314449 | orchestrator | TASK [Define report vars] ****************************************************** 2025-07-06 20:40:17.314459 | orchestrator | Sunday 06 July 2025 20:40:04 +0000 (0:00:00.817) 0:00:01.952 *********** 2025-07-06 20:40:17.314470 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:40:17.314482 | orchestrator | 2025-07-06 20:40:17.314493 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-07-06 20:40:17.314504 | orchestrator | Sunday 06 July 2025 20:40:04 +0000 (0:00:00.219) 0:00:02.171 *********** 2025-07-06 20:40:17.314515 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:40:17.314526 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:40:17.314537 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:40:17.314548 | orchestrator | 2025-07-06 20:40:17.314559 | orchestrator | TASK [Get container 
info] ****************************************************** 2025-07-06 20:40:17.314570 | orchestrator | Sunday 06 July 2025 20:40:05 +0000 (0:00:00.286) 0:00:02.458 *********** 2025-07-06 20:40:17.314581 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:40:17.314592 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:40:17.314603 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:40:17.314613 | orchestrator | 2025-07-06 20:40:17.314624 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-07-06 20:40:17.314635 | orchestrator | Sunday 06 July 2025 20:40:06 +0000 (0:00:00.976) 0:00:03.434 *********** 2025-07-06 20:40:17.314649 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:40:17.314662 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:40:17.314674 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:40:17.314719 | orchestrator | 2025-07-06 20:40:17.314733 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-07-06 20:40:17.314746 | orchestrator | Sunday 06 July 2025 20:40:06 +0000 (0:00:00.281) 0:00:03.715 *********** 2025-07-06 20:40:17.314758 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:40:17.314770 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:40:17.314783 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:40:17.314795 | orchestrator | 2025-07-06 20:40:17.314807 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-07-06 20:40:17.314820 | orchestrator | Sunday 06 July 2025 20:40:06 +0000 (0:00:00.451) 0:00:04.167 *********** 2025-07-06 20:40:17.314832 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:40:17.314845 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:40:17.314857 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:40:17.314870 | orchestrator | 2025-07-06 20:40:17.314882 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] 
******************** 2025-07-06 20:40:17.314894 | orchestrator | Sunday 06 July 2025 20:40:07 +0000 (0:00:00.298) 0:00:04.465 *********** 2025-07-06 20:40:17.314907 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:40:17.314920 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:40:17.314932 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:40:17.314944 | orchestrator | 2025-07-06 20:40:17.314957 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2025-07-06 20:40:17.314970 | orchestrator | Sunday 06 July 2025 20:40:07 +0000 (0:00:00.290) 0:00:04.755 *********** 2025-07-06 20:40:17.314983 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:40:17.314995 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:40:17.315006 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:40:17.315017 | orchestrator | 2025-07-06 20:40:17.315027 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-07-06 20:40:17.315038 | orchestrator | Sunday 06 July 2025 20:40:07 +0000 (0:00:00.289) 0:00:05.045 *********** 2025-07-06 20:40:17.315049 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:40:17.315069 | orchestrator | 2025-07-06 20:40:17.315080 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-07-06 20:40:17.315091 | orchestrator | Sunday 06 July 2025 20:40:07 +0000 (0:00:00.230) 0:00:05.275 *********** 2025-07-06 20:40:17.315102 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:40:17.315113 | orchestrator | 2025-07-06 20:40:17.315124 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-07-06 20:40:17.315135 | orchestrator | Sunday 06 July 2025 20:40:08 +0000 (0:00:00.614) 0:00:05.890 *********** 2025-07-06 20:40:17.315146 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:40:17.315156 | orchestrator | 2025-07-06 20:40:17.315183 | orchestrator | TASK 
[Flush handlers] ********************************************************** 2025-07-06 20:40:17.315195 | orchestrator | Sunday 06 July 2025 20:40:08 +0000 (0:00:00.239) 0:00:06.129 *********** 2025-07-06 20:40:17.315206 | orchestrator | 2025-07-06 20:40:17.315216 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-06 20:40:17.315227 | orchestrator | Sunday 06 July 2025 20:40:08 +0000 (0:00:00.066) 0:00:06.195 *********** 2025-07-06 20:40:17.315238 | orchestrator | 2025-07-06 20:40:17.315249 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-06 20:40:17.315260 | orchestrator | Sunday 06 July 2025 20:40:08 +0000 (0:00:00.066) 0:00:06.261 *********** 2025-07-06 20:40:17.315270 | orchestrator | 2025-07-06 20:40:17.315281 | orchestrator | TASK [Print report file information] ******************************************* 2025-07-06 20:40:17.315292 | orchestrator | Sunday 06 July 2025 20:40:09 +0000 (0:00:00.071) 0:00:06.333 *********** 2025-07-06 20:40:17.315303 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:40:17.315314 | orchestrator | 2025-07-06 20:40:17.315325 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-07-06 20:40:17.315336 | orchestrator | Sunday 06 July 2025 20:40:09 +0000 (0:00:00.264) 0:00:06.598 *********** 2025-07-06 20:40:17.315366 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:40:17.315378 | orchestrator | 2025-07-06 20:40:17.315408 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2025-07-06 20:40:17.315419 | orchestrator | Sunday 06 July 2025 20:40:09 +0000 (0:00:00.252) 0:00:06.851 *********** 2025-07-06 20:40:17.315430 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:40:17.315441 | orchestrator | 2025-07-06 20:40:17.315452 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 
2025-07-06 20:40:17.315463 | orchestrator | Sunday 06 July 2025 20:40:09 +0000 (0:00:00.129) 0:00:06.980 *********** 2025-07-06 20:40:17.315473 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:40:17.315484 | orchestrator | 2025-07-06 20:40:17.315495 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2025-07-06 20:40:17.315506 | orchestrator | Sunday 06 July 2025 20:40:11 +0000 (0:00:01.906) 0:00:08.887 *********** 2025-07-06 20:40:17.315516 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:40:17.315527 | orchestrator | 2025-07-06 20:40:17.315538 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2025-07-06 20:40:17.315549 | orchestrator | Sunday 06 July 2025 20:40:11 +0000 (0:00:00.243) 0:00:09.131 *********** 2025-07-06 20:40:17.315559 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:40:17.315570 | orchestrator | 2025-07-06 20:40:17.315581 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2025-07-06 20:40:17.315592 | orchestrator | Sunday 06 July 2025 20:40:12 +0000 (0:00:00.291) 0:00:09.422 *********** 2025-07-06 20:40:17.315602 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:40:17.315613 | orchestrator | 2025-07-06 20:40:17.315624 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2025-07-06 20:40:17.315635 | orchestrator | Sunday 06 July 2025 20:40:12 +0000 (0:00:00.329) 0:00:09.752 *********** 2025-07-06 20:40:17.315646 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:40:17.315656 | orchestrator | 2025-07-06 20:40:17.315667 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-07-06 20:40:17.315678 | orchestrator | Sunday 06 July 2025 20:40:12 +0000 (0:00:00.147) 0:00:09.899 *********** 2025-07-06 20:40:17.315740 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-06 
20:40:17.315753 | orchestrator | 2025-07-06 20:40:17.315764 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-07-06 20:40:17.315774 | orchestrator | Sunday 06 July 2025 20:40:12 +0000 (0:00:00.252) 0:00:10.151 *********** 2025-07-06 20:40:17.315785 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:40:17.315796 | orchestrator | 2025-07-06 20:40:17.315807 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-07-06 20:40:17.315818 | orchestrator | Sunday 06 July 2025 20:40:13 +0000 (0:00:00.243) 0:00:10.395 *********** 2025-07-06 20:40:17.315828 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-06 20:40:17.315852 | orchestrator | 2025-07-06 20:40:17.315863 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-07-06 20:40:17.315874 | orchestrator | Sunday 06 July 2025 20:40:14 +0000 (0:00:01.227) 0:00:11.623 *********** 2025-07-06 20:40:17.315885 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-06 20:40:17.315896 | orchestrator | 2025-07-06 20:40:17.315907 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-07-06 20:40:17.315917 | orchestrator | Sunday 06 July 2025 20:40:14 +0000 (0:00:00.257) 0:00:11.880 *********** 2025-07-06 20:40:17.315928 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-06 20:40:17.315939 | orchestrator | 2025-07-06 20:40:17.315949 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-06 20:40:17.315960 | orchestrator | Sunday 06 July 2025 20:40:14 +0000 (0:00:00.254) 0:00:12.134 *********** 2025-07-06 20:40:17.315970 | orchestrator | 2025-07-06 20:40:17.315981 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-06 20:40:17.315992 | orchestrator 
| Sunday 06 July 2025 20:40:14 +0000 (0:00:00.066) 0:00:12.201 *********** 2025-07-06 20:40:17.316003 | orchestrator | 2025-07-06 20:40:17.316014 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-06 20:40:17.316024 | orchestrator | Sunday 06 July 2025 20:40:14 +0000 (0:00:00.067) 0:00:12.268 *********** 2025-07-06 20:40:17.316035 | orchestrator | 2025-07-06 20:40:17.316046 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-07-06 20:40:17.316057 | orchestrator | Sunday 06 July 2025 20:40:15 +0000 (0:00:00.086) 0:00:12.355 *********** 2025-07-06 20:40:17.316067 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-07-06 20:40:17.316078 | orchestrator | 2025-07-06 20:40:17.316089 | orchestrator | TASK [Print report file information] ******************************************* 2025-07-06 20:40:17.316100 | orchestrator | Sunday 06 July 2025 20:40:16 +0000 (0:00:01.443) 0:00:13.798 *********** 2025-07-06 20:40:17.316110 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-07-06 20:40:17.316121 | orchestrator |  "msg": [ 2025-07-06 20:40:17.316132 | orchestrator |  "Validator run completed.", 2025-07-06 20:40:17.316143 | orchestrator |  "You can find the report file here:", 2025-07-06 20:40:17.316154 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2025-07-06T20:40:03+00:00-report.json", 2025-07-06 20:40:17.316166 | orchestrator |  "on the following host:", 2025-07-06 20:40:17.316177 | orchestrator |  "testbed-manager" 2025-07-06 20:40:17.316188 | orchestrator |  ] 2025-07-06 20:40:17.316199 | orchestrator | } 2025-07-06 20:40:17.316210 | orchestrator | 2025-07-06 20:40:17.316221 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:40:17.316233 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 
ignored=0 2025-07-06 20:40:17.316246 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 20:40:17.316265 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 20:40:17.577536 | orchestrator | 2025-07-06 20:40:17.577642 | orchestrator | 2025-07-06 20:40:17.577659 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:40:17.577673 | orchestrator | Sunday 06 July 2025 20:40:17 +0000 (0:00:00.787) 0:00:14.586 *********** 2025-07-06 20:40:17.577735 | orchestrator | =============================================================================== 2025-07-06 20:40:17.577750 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.91s 2025-07-06 20:40:17.577762 | orchestrator | Write report file ------------------------------------------------------- 1.44s 2025-07-06 20:40:17.577773 | orchestrator | Aggregate test results step one ----------------------------------------- 1.23s 2025-07-06 20:40:17.577784 | orchestrator | Get container info ------------------------------------------------------ 0.98s 2025-07-06 20:40:17.577795 | orchestrator | Create report output directory ------------------------------------------ 0.82s 2025-07-06 20:40:17.577807 | orchestrator | Print report file information ------------------------------------------- 0.79s 2025-07-06 20:40:17.577840 | orchestrator | Get timestamp for report file ------------------------------------------- 0.68s 2025-07-06 20:40:17.577853 | orchestrator | Aggregate test results step two ----------------------------------------- 0.61s 2025-07-06 20:40:17.577873 | orchestrator | Set test result to passed if container is existing ---------------------- 0.45s 2025-07-06 20:40:17.577890 | orchestrator | Fail test if mgr modules are disabled that should be enabled ------------ 0.33s 2025-07-06 20:40:17.577908 | 
orchestrator | Prepare test data ------------------------------------------------------- 0.30s 2025-07-06 20:40:17.577928 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.29s 2025-07-06 20:40:17.577947 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.29s 2025-07-06 20:40:17.577967 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.29s 2025-07-06 20:40:17.577984 | orchestrator | Prepare test data for container existance test -------------------------- 0.29s 2025-07-06 20:40:17.578000 | orchestrator | Set test result to failed if container is missing ----------------------- 0.28s 2025-07-06 20:40:17.578012 | orchestrator | Print report file information ------------------------------------------- 0.26s 2025-07-06 20:40:17.578083 | orchestrator | Aggregate test results step two ----------------------------------------- 0.26s 2025-07-06 20:40:17.578097 | orchestrator | Aggregate test results step three --------------------------------------- 0.25s 2025-07-06 20:40:17.578110 | orchestrator | Fail due to missing containers ------------------------------------------ 0.25s 2025-07-06 20:40:17.848097 | orchestrator | + osism validate ceph-osds 2025-07-06 20:40:38.229899 | orchestrator | 2025-07-06 20:40:38.229998 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2025-07-06 20:40:38.230011 | orchestrator | 2025-07-06 20:40:38.230066 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-07-06 20:40:38.230075 | orchestrator | Sunday 06 July 2025 20:40:34 +0000 (0:00:00.415) 0:00:00.415 *********** 2025-07-06 20:40:38.230084 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-07-06 20:40:38.230091 | orchestrator | 2025-07-06 20:40:38.230099 | orchestrator | TASK [Get extra vars for Ceph configuration] 
*********************************** 2025-07-06 20:40:38.230107 | orchestrator | Sunday 06 July 2025 20:40:34 +0000 (0:00:00.625) 0:00:01.040 *********** 2025-07-06 20:40:38.230114 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-07-06 20:40:38.230122 | orchestrator | 2025-07-06 20:40:38.230129 | orchestrator | TASK [Create report output directory] ****************************************** 2025-07-06 20:40:38.230137 | orchestrator | Sunday 06 July 2025 20:40:34 +0000 (0:00:00.231) 0:00:01.272 *********** 2025-07-06 20:40:38.230144 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-07-06 20:40:38.230151 | orchestrator | 2025-07-06 20:40:38.230159 | orchestrator | TASK [Define report vars] ****************************************************** 2025-07-06 20:40:38.230166 | orchestrator | Sunday 06 July 2025 20:40:35 +0000 (0:00:00.967) 0:00:02.239 *********** 2025-07-06 20:40:38.230201 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:40:38.230210 | orchestrator | 2025-07-06 20:40:38.230217 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-07-06 20:40:38.230225 | orchestrator | Sunday 06 July 2025 20:40:36 +0000 (0:00:00.130) 0:00:02.370 *********** 2025-07-06 20:40:38.230232 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:40:38.230240 | orchestrator | 2025-07-06 20:40:38.230248 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-07-06 20:40:38.230268 | orchestrator | Sunday 06 July 2025 20:40:36 +0000 (0:00:00.135) 0:00:02.505 *********** 2025-07-06 20:40:38.230276 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:40:38.230284 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:40:38.230291 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:40:38.230298 | orchestrator | 2025-07-06 20:40:38.230306 | orchestrator | TASK [Define OSD test variables] 
*********************************************** 2025-07-06 20:40:38.230313 | orchestrator | Sunday 06 July 2025 20:40:36 +0000 (0:00:00.321) 0:00:02.826 *********** 2025-07-06 20:40:38.230321 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:40:38.230329 | orchestrator | 2025-07-06 20:40:38.230338 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-07-06 20:40:38.230347 | orchestrator | Sunday 06 July 2025 20:40:36 +0000 (0:00:00.151) 0:00:02.978 *********** 2025-07-06 20:40:38.230355 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:40:38.230364 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:40:38.230372 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:40:38.230381 | orchestrator | 2025-07-06 20:40:38.230389 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2025-07-06 20:40:38.230397 | orchestrator | Sunday 06 July 2025 20:40:36 +0000 (0:00:00.315) 0:00:03.294 *********** 2025-07-06 20:40:38.230406 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:40:38.230414 | orchestrator | 2025-07-06 20:40:38.230422 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-07-06 20:40:38.230431 | orchestrator | Sunday 06 July 2025 20:40:37 +0000 (0:00:00.540) 0:00:03.834 *********** 2025-07-06 20:40:38.230438 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:40:38.230446 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:40:38.230458 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:40:38.230470 | orchestrator | 2025-07-06 20:40:38.230483 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2025-07-06 20:40:38.230495 | orchestrator | Sunday 06 July 2025 20:40:37 +0000 (0:00:00.476) 0:00:04.310 *********** 2025-07-06 20:40:38.230510 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e13a0e59b14e66a2c92977f6fa88eacc9658c313fc2b43855bbddc44985d305a', 'image': 
'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 5 minutes (healthy)'})  2025-07-06 20:40:38.230526 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9d855c4961021b434c45f6365279ab92e45dead453c9ce4afcae227779427c0f', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-07-06 20:40:38.230541 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3c8fb49d9fee275c4ae62a7ede828b984f8903945055e9420e20f686fbb2c0af', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2025-07-06 20:40:38.230557 | orchestrator | skipping: [testbed-node-3] => (item={'id': '43508e988026b574ed4677f3e5a4c7f3dda0424b73f28859f7da99e85a9918fa', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-07-06 20:40:38.230570 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6f84f3f463c36295161f49e35ae49906521eb3717b29e4422f49dc1bc9afcfb9', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-07-06 20:40:38.230609 | orchestrator | skipping: [testbed-node-3] => (item={'id': '51e2c18919182c1f1d44f5425809885ccd76d2ca61f76e0740b97433aa768af0', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-07-06 20:40:38.230624 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'de6a4e1fdae13752b52a7d9bc71eb86e3a8b317f3509cb3da970d6363f30db6c', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2025-07-06 20:40:38.230632 | orchestrator | 
skipping: [testbed-node-3] => (item={'id': '8bea39c4f1ea6542bd9da9807c84f27403ed34e330890e69cfa943f345e6204f', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 15 minutes'})  2025-07-06 20:40:38.230639 | orchestrator | skipping: [testbed-node-3] => (item={'id': '053c4f686c7b82f943e91815172f2e0eebc3998ace926f3057909de36d4cfe73', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2025-07-06 20:40:38.230650 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd87d0bcbddfeb9f7055b966425fad4a98e76293002aed2c11a42c8dac94be154', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})  2025-07-06 20:40:38.230661 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0ee7b2f2d82a97ef7a3e38ece40169168effc804c7ae65757677841db4170ffa', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 23 minutes'})  2025-07-06 20:40:38.230668 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e9f13b8503c86cc0f163fca3713858a44b6b6f541f64647c552f1722532b3d27', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 23 minutes'})  2025-07-06 20:40:38.230676 | orchestrator | ok: [testbed-node-3] => (item={'id': '58fcc458ffcebf332aa608326bd82e510d43b2b278006de118970a54ee9ec5d5', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-07-06 20:40:38.230685 | orchestrator | ok: [testbed-node-3] => (item={'id': '5ebe987c3d20c5c02640102052e8513d001e8edbc12d015f3165677b5a7d236d', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 
'Up 24 minutes'}) 2025-07-06 20:40:38.230692 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c60e0d469f59e3acc24dabe7886eb60442be73a5fb430d57e8e7480500a6f83e', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 27 minutes'})  2025-07-06 20:40:38.230700 | orchestrator | skipping: [testbed-node-3] => (item={'id': '11e1d1d67caa1dc812a4d547c966ce74d3fe580c97833fb23b2356c22c95090f', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-07-06 20:40:38.230739 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6bf4e1fd472532dbc68470681a24e45f5a7a4443d83a3650d9c0392eb5b242da', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-07-06 20:40:38.230749 | orchestrator | skipping: [testbed-node-3] => (item={'id': '68057f1ad3a99f5f4e6a47466285289973ca1b6693cadc0126f60b42555e6292', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})  2025-07-06 20:40:38.230757 | orchestrator | skipping: [testbed-node-3] => (item={'id': '77fd632fe70321816eef63845bd15e6da87a8aca9ef56a57f02a770a170be17e', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 30 minutes'})  2025-07-06 20:40:38.230769 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9713ddc577f3a816e381e3a53f19ae72e20db27d513b1c909a7f3a9b515f495e', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 31 minutes'})  2025-07-06 20:40:38.230777 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4fbbbb397e1d50ec8e7de2ee7b1bddabc5fed2d296bd794e4579f087bfd98491', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': 
'/nova_compute', 'state': 'running', 'status': 'Up 5 minutes (healthy)'})  2025-07-06 20:40:38.230790 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b2de2dae6428b12889dd6d089c46ced9be91df8a68c6e03ecb29c88675656668', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-07-06 20:40:38.479925 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b0ea36f14ffad0881cd6a21cddcca41d9c4f06aec077e486df5de429db94874b', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2025-07-06 20:40:38.480031 | orchestrator | skipping: [testbed-node-4] => (item={'id': '22c77d964ab5a82ee04be6c6f4bb15df1d08f2b1e277607199781e677ebf684b', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-07-06 20:40:38.480048 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b4352bd5f3c8cb59e446f3cf10144a5ccd2cfc3faf39381e5c30e3607e37d9b4', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-07-06 20:40:38.480061 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f450e767765a874b0326b2787b20d70cb76a6782e4641ca1dbd7e34ca673acb4', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-07-06 20:40:38.480074 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ffd7cc24be899352c8124b1a7a9d7d72acaf3576969761be7b7c4c448da0273e', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2025-07-06 20:40:38.480086 | orchestrator | skipping: [testbed-node-4] => (item={'id': 
'384c8c9d05f1c865a3e1b179b855c9eca3b3bed57d6aaf52839046c0b4a3be6c', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 15 minutes'})  2025-07-06 20:40:38.480097 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd15b71863cec08cf91867c528bc839fca7006768cb07e25061f380211516e871', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2025-07-06 20:40:38.480109 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1893160418c647f077fd17a81e3f7f87cdb9f725342c64a94366d20d5cdad919', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2025-07-06 20:40:38.480141 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1fa2378b54e8f9171b3c1ec6dc4872ff7947dae8979714f76fac0feabfbb1303', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 23 minutes'})  2025-07-06 20:40:38.480153 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2357aea2b52c59af2147be247a1910ce40a83479661d64e38c11063440e9ecde', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 23 minutes'})  2025-07-06 20:40:38.480165 | orchestrator | ok: [testbed-node-4] => (item={'id': '779a5df381cd18fb6ba07cfa9ca14832db9f0b5ca1db7bad4b8f1ee4310823fe', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-07-06 20:40:38.480196 | orchestrator | ok: [testbed-node-4] => (item={'id': '1920826fc0d5d942e160806201e18e18632e0ccf8ced0179ffe0e21ebc04e57f', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-07-06 
20:40:38.480208 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ab0bf08c20fdf4c21d52df4896493cc98049176db060bf9d8aa61d5e00cbd1cd', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 27 minutes'})  2025-07-06 20:40:38.480221 | orchestrator | skipping: [testbed-node-4] => (item={'id': '98ea3dfe0eea9dcbdaee8359b821d2b2685dcb0dab821f8bf294e3df8fca41ec', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-07-06 20:40:38.480232 | orchestrator | skipping: [testbed-node-4] => (item={'id': '457402ae4e28d3b8c288196c40fafe3cb67bd0d84bc734a745a2e2096c805bab', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-07-06 20:40:38.480260 | orchestrator | skipping: [testbed-node-4] => (item={'id': '085bd42c16d40f89affac77596abbf4e098a86e6fb5fc0339ce46592c4c6321f', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})  2025-07-06 20:40:38.480273 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1f7609bfafd483b534c9f3f526c1f79f955f1e736f26fb7cfc2e1fe36bcd3844', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 30 minutes'})  2025-07-06 20:40:38.480284 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b06a8aba1cfc456d27df3ee0e3a820a6dea7641960d4eff9f2040fbe6ef662aa', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 31 minutes'})  2025-07-06 20:40:38.480296 | orchestrator | skipping: [testbed-node-5] => (item={'id': '48cd65c427e49ca5a657c788a3d66b4b6676993cb44412b72c1c584de1bcde7a', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 
'status': 'Up 5 minutes (healthy)'})  2025-07-06 20:40:38.480307 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c3c39b60d3b1d8dc06f5716798e4f86477cb1f72893f20f5dcee01a08a66a90c', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-07-06 20:40:38.480324 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b35d35fb93cc119c8c879f43dfa7c30865ab41d7b6348d249482a5802de19cb9', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2025-07-06 20:40:38.480336 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8950f04b7bca750c3c4d077fdd076ebc08e4757d1a75d837611ae5e4fe86ecf1', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-07-06 20:40:38.480348 | orchestrator | skipping: [testbed-node-5] => (item={'id': '26dec10dba8bdd6312795c38cb9cae79541296e8f1b24d1228a561e25888501f', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-07-06 20:40:38.480359 | orchestrator | skipping: [testbed-node-5] => (item={'id': '11b68c98d257dbff82d1e151e2fadd32a81ae7ddcf257deaea15d402a44f84dc', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-07-06 20:40:38.480371 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'cf2f5161de8da6be3a7992ecea31982790bac39faa3c87a7da9d8ebc0c556023', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2025-07-06 20:40:38.480389 | orchestrator | skipping: [testbed-node-5] => (item={'id': 
'0cfaa3b4f0fb44c493a64f543edb72512a819d979ec0308295db2a7de4911dca', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 15 minutes'})  2025-07-06 20:40:38.480400 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'da9c956f1812b8472f7dfd20c17f79c0416651720c6bcafb5f6d3a7b25a8bc63', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2025-07-06 20:40:38.480411 | orchestrator | skipping: [testbed-node-5] => (item={'id': '039b8034deff638d19bec92486aee20fc20c4eda8a2e947b6843cebd5c714cb9', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})  2025-07-06 20:40:38.480423 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f9eb4da32f425e05c5d257fb3d15ebd956ead27cbada976ff684a8381b5640e4', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 23 minutes'})  2025-07-06 20:40:38.480434 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9b1902917e5fe083f9a50f1bce5251eeb4432d652eade01f93eedf4c54eee2f9', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 23 minutes'})  2025-07-06 20:40:38.480453 | orchestrator | ok: [testbed-node-5] => (item={'id': 'd31a35bf56017d70e8d9c7234f830414f3e7358fddcfa7054ffa4eacf47b1735', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-07-06 20:40:46.344586 | orchestrator | ok: [testbed-node-5] => (item={'id': 'b7e4e93bb48fe582ce507af69a80d617ac1e13d5b37d3d2656a1d86ecf2a94e9', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-07-06 
20:40:46.344669 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'de62f2f15288f89ba91969358f4789e4bc8e6ac95ac2d70a050fe449f0670701', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 27 minutes'})  2025-07-06 20:40:46.344678 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'bbe8047335e459c4c2d8e499f60eaafc5824a21a914564709ceb71122985c073', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-07-06 20:40:46.344685 | orchestrator | skipping: [testbed-node-5] => (item={'id': '93c6233f211e547dac0d7953cc0b6aba0b62810977d7d3cce672f24997fe7723', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-07-06 20:40:46.344704 | orchestrator | skipping: [testbed-node-5] => (item={'id': '690913e4282f564874f55fefed120a4e4a5ce680dd9d46b7bbcd668b519ccb5c', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})  2025-07-06 20:40:46.344709 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7596e393bd401d4e8e2f8407a874e3c8272152ea484df5129cdbe4a3586dee65', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 30 minutes'})  2025-07-06 20:40:46.344714 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7465af6d7cf0e17e5ee3afd7f9dd46556b06dd55a3651e95f2ff0114999dd8db', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 31 minutes'})  2025-07-06 20:40:46.344751 | orchestrator | 2025-07-06 20:40:46.344758 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2025-07-06 20:40:46.344763 | orchestrator | Sunday 06 July 2025 20:40:38 +0000 (0:00:00.500) 
0:00:04.810 *********** 2025-07-06 20:40:46.344768 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:40:46.344786 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:40:46.344790 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:40:46.344795 | orchestrator | 2025-07-06 20:40:46.344800 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2025-07-06 20:40:46.344804 | orchestrator | Sunday 06 July 2025 20:40:38 +0000 (0:00:00.314) 0:00:05.125 *********** 2025-07-06 20:40:46.344809 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:40:46.344815 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:40:46.344819 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:40:46.344823 | orchestrator | 2025-07-06 20:40:46.344828 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2025-07-06 20:40:46.344832 | orchestrator | Sunday 06 July 2025 20:40:39 +0000 (0:00:00.284) 0:00:05.409 *********** 2025-07-06 20:40:46.344836 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:40:46.344841 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:40:46.344845 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:40:46.344849 | orchestrator | 2025-07-06 20:40:46.344854 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-07-06 20:40:46.344858 | orchestrator | Sunday 06 July 2025 20:40:39 +0000 (0:00:00.480) 0:00:05.890 *********** 2025-07-06 20:40:46.344862 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:40:46.344866 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:40:46.344871 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:40:46.344875 | orchestrator | 2025-07-06 20:40:46.344879 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2025-07-06 20:40:46.344884 | orchestrator | Sunday 06 July 2025 20:40:39 +0000 (0:00:00.284) 0:00:06.174 *********** 2025-07-06 20:40:46.344888 | 
orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2025-07-06 20:40:46.344894 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2025-07-06 20:40:46.344898 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:40:46.344903 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2025-07-06 20:40:46.344907 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2025-07-06 20:40:46.344912 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:40:46.344916 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2025-07-06 20:40:46.344920 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2025-07-06 20:40:46.344925 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:40:46.344929 | orchestrator | 2025-07-06 20:40:46.344933 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2025-07-06 20:40:46.344938 | orchestrator | Sunday 06 July 2025 20:40:40 +0000 (0:00:00.309) 0:00:06.484 *********** 2025-07-06 20:40:46.344942 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:40:46.344947 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:40:46.344951 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:40:46.344955 | orchestrator | 2025-07-06 20:40:46.344970 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-07-06 20:40:46.344975 | orchestrator | Sunday 06 July 2025 20:40:40 +0000 (0:00:00.306) 0:00:06.790 *********** 2025-07-06 20:40:46.344979 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:40:46.344984 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:40:46.344988 | orchestrator | skipping: 
[testbed-node-5] 2025-07-06 20:40:46.344992 | orchestrator | 2025-07-06 20:40:46.344997 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-07-06 20:40:46.345001 | orchestrator | Sunday 06 July 2025 20:40:40 +0000 (0:00:00.476) 0:00:07.266 *********** 2025-07-06 20:40:46.345006 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:40:46.345010 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:40:46.345014 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:40:46.345019 | orchestrator | 2025-07-06 20:40:46.345027 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2025-07-06 20:40:46.345031 | orchestrator | Sunday 06 July 2025 20:40:41 +0000 (0:00:00.311) 0:00:07.578 *********** 2025-07-06 20:40:46.345035 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:40:46.345040 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:40:46.345045 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:40:46.345049 | orchestrator | 2025-07-06 20:40:46.345053 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-07-06 20:40:46.345058 | orchestrator | Sunday 06 July 2025 20:40:41 +0000 (0:00:00.285) 0:00:07.864 *********** 2025-07-06 20:40:46.345062 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:40:46.345067 | orchestrator | 2025-07-06 20:40:46.345071 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-07-06 20:40:46.345075 | orchestrator | Sunday 06 July 2025 20:40:41 +0000 (0:00:00.235) 0:00:08.100 *********** 2025-07-06 20:40:46.345080 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:40:46.345084 | orchestrator | 2025-07-06 20:40:46.345089 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-07-06 20:40:46.345093 | orchestrator | Sunday 06 July 2025 20:40:41 +0000 (0:00:00.236) 0:00:08.336 *********** 
2025-07-06 20:40:46.345097 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:40:46.345102 | orchestrator | 2025-07-06 20:40:46.345106 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-06 20:40:46.345111 | orchestrator | Sunday 06 July 2025 20:40:42 +0000 (0:00:00.244) 0:00:08.580 *********** 2025-07-06 20:40:46.345115 | orchestrator | 2025-07-06 20:40:46.345119 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-06 20:40:46.345124 | orchestrator | Sunday 06 July 2025 20:40:42 +0000 (0:00:00.064) 0:00:08.645 *********** 2025-07-06 20:40:46.345128 | orchestrator | 2025-07-06 20:40:46.345132 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-06 20:40:46.345137 | orchestrator | Sunday 06 July 2025 20:40:42 +0000 (0:00:00.061) 0:00:08.707 *********** 2025-07-06 20:40:46.345141 | orchestrator | 2025-07-06 20:40:46.345145 | orchestrator | TASK [Print report file information] ******************************************* 2025-07-06 20:40:46.345150 | orchestrator | Sunday 06 July 2025 20:40:42 +0000 (0:00:00.238) 0:00:08.945 *********** 2025-07-06 20:40:46.345154 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:40:46.345158 | orchestrator | 2025-07-06 20:40:46.345163 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2025-07-06 20:40:46.345167 | orchestrator | Sunday 06 July 2025 20:40:42 +0000 (0:00:00.249) 0:00:09.195 *********** 2025-07-06 20:40:46.345171 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:40:46.345176 | orchestrator | 2025-07-06 20:40:46.345180 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-07-06 20:40:46.345184 | orchestrator | Sunday 06 July 2025 20:40:43 +0000 (0:00:00.245) 0:00:09.441 *********** 2025-07-06 20:40:46.345189 | orchestrator | ok: [testbed-node-3] 
2025-07-06 20:40:46.345193 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:40:46.345197 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:40:46.345202 | orchestrator | 2025-07-06 20:40:46.345206 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2025-07-06 20:40:46.345210 | orchestrator | Sunday 06 July 2025 20:40:43 +0000 (0:00:00.293) 0:00:09.735 *********** 2025-07-06 20:40:46.345215 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:40:46.345219 | orchestrator | 2025-07-06 20:40:46.345223 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2025-07-06 20:40:46.345228 | orchestrator | Sunday 06 July 2025 20:40:43 +0000 (0:00:00.220) 0:00:09.956 *********** 2025-07-06 20:40:46.345232 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-07-06 20:40:46.345237 | orchestrator | 2025-07-06 20:40:46.345241 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2025-07-06 20:40:46.345245 | orchestrator | Sunday 06 July 2025 20:40:45 +0000 (0:00:01.584) 0:00:11.540 *********** 2025-07-06 20:40:46.345250 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:40:46.345257 | orchestrator | 2025-07-06 20:40:46.345261 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2025-07-06 20:40:46.345291 | orchestrator | Sunday 06 July 2025 20:40:45 +0000 (0:00:00.133) 0:00:11.673 *********** 2025-07-06 20:40:46.345296 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:40:46.345301 | orchestrator | 2025-07-06 20:40:46.345305 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2025-07-06 20:40:46.345309 | orchestrator | Sunday 06 July 2025 20:40:45 +0000 (0:00:00.288) 0:00:11.961 *********** 2025-07-06 20:40:46.345314 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:40:46.345318 | orchestrator | 2025-07-06 20:40:46.345323 
| orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2025-07-06 20:40:46.345327 | orchestrator | Sunday 06 July 2025 20:40:45 +0000 (0:00:00.109) 0:00:12.071 *********** 2025-07-06 20:40:46.345331 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:40:46.345336 | orchestrator | 2025-07-06 20:40:46.345340 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-07-06 20:40:46.345344 | orchestrator | Sunday 06 July 2025 20:40:45 +0000 (0:00:00.139) 0:00:12.211 *********** 2025-07-06 20:40:46.345349 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:40:46.345353 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:40:46.345357 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:40:46.345362 | orchestrator | 2025-07-06 20:40:46.345366 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2025-07-06 20:40:46.345373 | orchestrator | Sunday 06 July 2025 20:40:46 +0000 (0:00:00.474) 0:00:12.686 *********** 2025-07-06 20:40:58.605102 | orchestrator | changed: [testbed-node-4] 2025-07-06 20:40:58.605216 | orchestrator | changed: [testbed-node-3] 2025-07-06 20:40:58.605231 | orchestrator | changed: [testbed-node-5] 2025-07-06 20:40:58.605243 | orchestrator | 2025-07-06 20:40:58.605256 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2025-07-06 20:40:58.605269 | orchestrator | Sunday 06 July 2025 20:40:48 +0000 (0:00:02.401) 0:00:15.087 *********** 2025-07-06 20:40:58.605280 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:40:58.605292 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:40:58.605303 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:40:58.605314 | orchestrator | 2025-07-06 20:40:58.605325 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2025-07-06 20:40:58.605336 | orchestrator | Sunday 06 July 2025 20:40:49 +0000 (0:00:00.321) 
0:00:15.408 *********** 2025-07-06 20:40:58.605347 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:40:58.605359 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:40:58.605369 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:40:58.605381 | orchestrator | 2025-07-06 20:40:58.605397 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2025-07-06 20:40:58.605416 | orchestrator | Sunday 06 July 2025 20:40:49 +0000 (0:00:00.476) 0:00:15.884 *********** 2025-07-06 20:40:58.605435 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:40:58.605454 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:40:58.605472 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:40:58.605490 | orchestrator | 2025-07-06 20:40:58.605517 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2025-07-06 20:40:58.605538 | orchestrator | Sunday 06 July 2025 20:40:50 +0000 (0:00:00.492) 0:00:16.376 *********** 2025-07-06 20:40:58.605556 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:40:58.605575 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:40:58.605615 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:40:58.605636 | orchestrator | 2025-07-06 20:40:58.605655 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2025-07-06 20:40:58.605674 | orchestrator | Sunday 06 July 2025 20:40:50 +0000 (0:00:00.296) 0:00:16.672 *********** 2025-07-06 20:40:58.605693 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:40:58.605713 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:40:58.605763 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:40:58.605783 | orchestrator | 2025-07-06 20:40:58.605828 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2025-07-06 20:40:58.605849 | orchestrator | Sunday 06 July 2025 20:40:50 +0000 (0:00:00.291) 0:00:16.964 *********** 2025-07-06 
20:40:58.605868 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:40:58.605893 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:40:58.605918 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:40:58.605936 | orchestrator | 2025-07-06 20:40:58.605954 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-07-06 20:40:58.605971 | orchestrator | Sunday 06 July 2025 20:40:50 +0000 (0:00:00.263) 0:00:17.227 *********** 2025-07-06 20:40:58.605989 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:40:58.606006 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:40:58.606101 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:40:58.606122 | orchestrator | 2025-07-06 20:40:58.606139 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2025-07-06 20:40:58.606158 | orchestrator | Sunday 06 July 2025 20:40:51 +0000 (0:00:00.697) 0:00:17.924 *********** 2025-07-06 20:40:58.606220 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:40:58.606238 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:40:58.606256 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:40:58.606274 | orchestrator | 2025-07-06 20:40:58.606292 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2025-07-06 20:40:58.606311 | orchestrator | Sunday 06 July 2025 20:40:52 +0000 (0:00:00.458) 0:00:18.383 *********** 2025-07-06 20:40:58.606341 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:40:58.606360 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:40:58.606377 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:40:58.606395 | orchestrator | 2025-07-06 20:40:58.606412 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2025-07-06 20:40:58.606430 | orchestrator | Sunday 06 July 2025 20:40:52 +0000 (0:00:00.305) 0:00:18.688 *********** 2025-07-06 20:40:58.606448 | orchestrator | skipping: 
[testbed-node-3] 2025-07-06 20:40:58.606466 | orchestrator | skipping: [testbed-node-4] 2025-07-06 20:40:58.606484 | orchestrator | skipping: [testbed-node-5] 2025-07-06 20:40:58.606502 | orchestrator | 2025-07-06 20:40:58.606519 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2025-07-06 20:40:58.606538 | orchestrator | Sunday 06 July 2025 20:40:52 +0000 (0:00:00.282) 0:00:18.971 *********** 2025-07-06 20:40:58.606556 | orchestrator | ok: [testbed-node-3] 2025-07-06 20:40:58.606573 | orchestrator | ok: [testbed-node-4] 2025-07-06 20:40:58.606592 | orchestrator | ok: [testbed-node-5] 2025-07-06 20:40:58.606610 | orchestrator | 2025-07-06 20:40:58.606629 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-07-06 20:40:58.606648 | orchestrator | Sunday 06 July 2025 20:40:53 +0000 (0:00:00.477) 0:00:19.448 *********** 2025-07-06 20:40:58.606667 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-07-06 20:40:58.606686 | orchestrator | 2025-07-06 20:40:58.606706 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-07-06 20:40:58.606725 | orchestrator | Sunday 06 July 2025 20:40:53 +0000 (0:00:00.290) 0:00:19.738 *********** 2025-07-06 20:40:58.606774 | orchestrator | skipping: [testbed-node-3] 2025-07-06 20:40:58.606794 | orchestrator | 2025-07-06 20:40:58.606813 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-07-06 20:40:58.606831 | orchestrator | Sunday 06 July 2025 20:40:53 +0000 (0:00:00.246) 0:00:19.984 *********** 2025-07-06 20:40:58.606851 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-07-06 20:40:58.606863 | orchestrator | 2025-07-06 20:40:58.606874 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-07-06 20:40:58.606885 | orchestrator | Sunday 06 July 2025 
20:40:55 +0000 (0:00:01.620) 0:00:21.605 *********** 2025-07-06 20:40:58.606896 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-07-06 20:40:58.606906 | orchestrator | 2025-07-06 20:40:58.606917 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-07-06 20:40:58.606946 | orchestrator | Sunday 06 July 2025 20:40:55 +0000 (0:00:00.249) 0:00:21.854 *********** 2025-07-06 20:40:58.606982 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-07-06 20:40:58.606993 | orchestrator | 2025-07-06 20:40:58.607004 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-06 20:40:58.607015 | orchestrator | Sunday 06 July 2025 20:40:55 +0000 (0:00:00.241) 0:00:22.095 *********** 2025-07-06 20:40:58.607026 | orchestrator | 2025-07-06 20:40:58.607037 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-06 20:40:58.607048 | orchestrator | Sunday 06 July 2025 20:40:55 +0000 (0:00:00.076) 0:00:22.172 *********** 2025-07-06 20:40:58.607059 | orchestrator | 2025-07-06 20:40:58.607069 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-07-06 20:40:58.607080 | orchestrator | Sunday 06 July 2025 20:40:55 +0000 (0:00:00.064) 0:00:22.236 *********** 2025-07-06 20:40:58.607091 | orchestrator | 2025-07-06 20:40:58.607102 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-07-06 20:40:58.607113 | orchestrator | Sunday 06 July 2025 20:40:55 +0000 (0:00:00.069) 0:00:22.305 *********** 2025-07-06 20:40:58.607124 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-07-06 20:40:58.607135 | orchestrator | 2025-07-06 20:40:58.607146 | orchestrator | TASK [Print report file information] ******************************************* 2025-07-06 20:40:58.607157 | orchestrator | 
Sunday 06 July 2025 20:40:57 +0000 (0:00:01.463) 0:00:23.769 *********** 2025-07-06 20:40:58.607167 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2025-07-06 20:40:58.607178 | orchestrator |  "msg": [ 2025-07-06 20:40:58.607190 | orchestrator |  "Validator run completed.", 2025-07-06 20:40:58.607201 | orchestrator |  "You can find the report file here:", 2025-07-06 20:40:58.607221 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2025-07-06T20:40:34+00:00-report.json", 2025-07-06 20:40:58.607234 | orchestrator |  "on the following host:", 2025-07-06 20:40:58.607245 | orchestrator |  "testbed-manager" 2025-07-06 20:40:58.607256 | orchestrator |  ] 2025-07-06 20:40:58.607267 | orchestrator | } 2025-07-06 20:40:58.607279 | orchestrator | 2025-07-06 20:40:58.607290 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:40:58.607301 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2025-07-06 20:40:58.607315 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-07-06 20:40:58.607326 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-07-06 20:40:58.607336 | orchestrator | 2025-07-06 20:40:58.607347 | orchestrator | 2025-07-06 20:40:58.607358 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:40:58.607369 | orchestrator | Sunday 06 July 2025 20:40:58 +0000 (0:00:00.873) 0:00:24.642 *********** 2025-07-06 20:40:58.607380 | orchestrator | =============================================================================== 2025-07-06 20:40:58.607391 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.40s 2025-07-06 20:40:58.607401 | orchestrator | Aggregate test results step one 
----------------------------------------- 1.62s 2025-07-06 20:40:58.607412 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.58s 2025-07-06 20:40:58.607423 | orchestrator | Write report file ------------------------------------------------------- 1.46s 2025-07-06 20:40:58.607434 | orchestrator | Create report output directory ------------------------------------------ 0.97s 2025-07-06 20:40:58.607445 | orchestrator | Print report file information ------------------------------------------- 0.87s 2025-07-06 20:40:58.607455 | orchestrator | Prepare test data ------------------------------------------------------- 0.70s 2025-07-06 20:40:58.607474 | orchestrator | Get timestamp for report file ------------------------------------------- 0.63s 2025-07-06 20:40:58.607485 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.54s 2025-07-06 20:40:58.607496 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.50s 2025-07-06 20:40:58.607506 | orchestrator | Fail if count of encrypted OSDs does not match -------------------------- 0.49s 2025-07-06 20:40:58.607517 | orchestrator | Set test result to passed if count matches ------------------------------ 0.48s 2025-07-06 20:40:58.607528 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.48s 2025-07-06 20:40:58.607539 | orchestrator | Set test result to failed if an OSD is not running ---------------------- 0.48s 2025-07-06 20:40:58.607549 | orchestrator | Prepare test data ------------------------------------------------------- 0.48s 2025-07-06 20:40:58.607560 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.48s 2025-07-06 20:40:58.607571 | orchestrator | Prepare test data ------------------------------------------------------- 0.47s 2025-07-06 20:40:58.607582 | orchestrator | Get CRUSH node data of each OSD host and root 
node childs --------------- 0.46s 2025-07-06 20:40:58.607592 | orchestrator | Flush handlers ---------------------------------------------------------- 0.37s 2025-07-06 20:40:58.607603 | orchestrator | Parse LVM data as JSON -------------------------------------------------- 0.32s 2025-07-06 20:40:58.873779 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2025-07-06 20:40:58.879697 | orchestrator | + set -e 2025-07-06 20:40:58.879773 | orchestrator | + source /opt/manager-vars.sh 2025-07-06 20:40:58.879782 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-07-06 20:40:58.879789 | orchestrator | ++ NUMBER_OF_NODES=6 2025-07-06 20:40:58.879796 | orchestrator | ++ export CEPH_VERSION=reef 2025-07-06 20:40:58.879802 | orchestrator | ++ CEPH_VERSION=reef 2025-07-06 20:40:58.879809 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-07-06 20:40:58.879816 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-07-06 20:40:58.879823 | orchestrator | ++ export MANAGER_VERSION=latest 2025-07-06 20:40:58.879830 | orchestrator | ++ MANAGER_VERSION=latest 2025-07-06 20:40:58.879836 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-07-06 20:40:58.879842 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-07-06 20:40:58.879848 | orchestrator | ++ export ARA=false 2025-07-06 20:40:58.879855 | orchestrator | ++ ARA=false 2025-07-06 20:40:58.879861 | orchestrator | ++ export DEPLOY_MODE=manager 2025-07-06 20:40:58.879867 | orchestrator | ++ DEPLOY_MODE=manager 2025-07-06 20:40:58.879874 | orchestrator | ++ export TEMPEST=false 2025-07-06 20:40:58.879880 | orchestrator | ++ TEMPEST=false 2025-07-06 20:40:58.879886 | orchestrator | ++ export IS_ZUUL=true 2025-07-06 20:40:58.879892 | orchestrator | ++ IS_ZUUL=true 2025-07-06 20:40:58.879899 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.163 2025-07-06 20:40:58.879914 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.163 2025-07-06 20:40:58.879921 | orchestrator | ++ export 
EXTERNAL_API=false 2025-07-06 20:40:58.879935 | orchestrator | ++ EXTERNAL_API=false 2025-07-06 20:40:58.879941 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-07-06 20:40:58.879948 | orchestrator | ++ IMAGE_USER=ubuntu 2025-07-06 20:40:58.879954 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-07-06 20:40:58.879960 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-07-06 20:40:58.879966 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-07-06 20:40:58.879972 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-07-06 20:40:58.879979 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-07-06 20:40:58.879985 | orchestrator | + source /etc/os-release 2025-07-06 20:40:58.879991 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.2 LTS' 2025-07-06 20:40:58.879997 | orchestrator | ++ NAME=Ubuntu 2025-07-06 20:40:58.880004 | orchestrator | ++ VERSION_ID=24.04 2025-07-06 20:40:58.880010 | orchestrator | ++ VERSION='24.04.2 LTS (Noble Numbat)' 2025-07-06 20:40:58.880016 | orchestrator | ++ VERSION_CODENAME=noble 2025-07-06 20:40:58.880022 | orchestrator | ++ ID=ubuntu 2025-07-06 20:40:58.880029 | orchestrator | ++ ID_LIKE=debian 2025-07-06 20:40:58.880035 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2025-07-06 20:40:58.880042 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2025-07-06 20:40:58.880048 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2025-07-06 20:40:58.880291 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2025-07-06 20:40:58.880305 | orchestrator | ++ UBUNTU_CODENAME=noble 2025-07-06 20:40:58.880312 | orchestrator | ++ LOGO=ubuntu-logo 2025-07-06 20:40:58.880338 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2025-07-06 20:40:58.880357 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2025-07-06 20:40:58.880365 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl 
monitoring-plugins-basic mysql-client 2025-07-06 20:40:58.909161 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-07-06 20:41:21.061302 | orchestrator | 2025-07-06 20:41:21.061411 | orchestrator | # Status of Elasticsearch 2025-07-06 20:41:21.061428 | orchestrator | 2025-07-06 20:41:21.061441 | orchestrator | + pushd /opt/configuration/contrib 2025-07-06 20:41:21.061454 | orchestrator | + echo 2025-07-06 20:41:21.061466 | orchestrator | + echo '# Status of Elasticsearch' 2025-07-06 20:41:21.061477 | orchestrator | + echo 2025-07-06 20:41:21.061489 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2025-07-06 20:41:21.252410 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2025-07-06 20:41:21.252501 | orchestrator | 2025-07-06 20:41:21.252516 | orchestrator | # Status of MariaDB 2025-07-06 20:41:21.252528 | orchestrator | 2025-07-06 20:41:21.252538 | orchestrator | + echo 2025-07-06 20:41:21.252549 | orchestrator | + echo '# Status of MariaDB' 2025-07-06 20:41:21.252559 | orchestrator | + echo 2025-07-06 20:41:21.252568 | orchestrator | + MARIADB_USER=root_shard_0 2025-07-06 20:41:21.252579 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2025-07-06 20:41:21.323388 | orchestrator | Reading package lists... 2025-07-06 20:41:21.614633 | orchestrator | Building dependency tree... 2025-07-06 20:41:21.615251 | orchestrator | Reading state information... 2025-07-06 20:41:21.963550 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 
2025-07-06 20:41:21.963655 | orchestrator | bc set to manually installed. 2025-07-06 20:41:21.963671 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 2025-07-06 20:41:22.547283 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2025-07-06 20:41:22.547427 | orchestrator | 2025-07-06 20:41:22.547446 | orchestrator | # Status of Prometheus 2025-07-06 20:41:22.547458 | orchestrator | 2025-07-06 20:41:22.547470 | orchestrator | + echo 2025-07-06 20:41:22.548419 | orchestrator | + echo '# Status of Prometheus' 2025-07-06 20:41:22.548507 | orchestrator | + echo 2025-07-06 20:41:22.548523 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2025-07-06 20:41:22.599044 | orchestrator | Unauthorized 2025-07-06 20:41:22.601544 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2025-07-06 20:41:22.671406 | orchestrator | Unauthorized 2025-07-06 20:41:22.674503 | orchestrator | 2025-07-06 20:41:22.674555 | orchestrator | # Status of RabbitMQ 2025-07-06 20:41:22.674569 | orchestrator | 2025-07-06 20:41:22.674581 | orchestrator | + echo 2025-07-06 20:41:22.674592 | orchestrator | + echo '# Status of RabbitMQ' 2025-07-06 20:41:22.674603 | orchestrator | + echo 2025-07-06 20:41:22.674615 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2025-07-06 20:41:23.133991 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2025-07-06 20:41:23.144249 | orchestrator | 2025-07-06 20:41:23.144338 | orchestrator | # Status of Redis 2025-07-06 20:41:23.144354 | orchestrator | 2025-07-06 20:41:23.144366 | orchestrator | + echo 2025-07-06 20:41:23.144378 | orchestrator | + echo '# Status of Redis' 2025-07-06 20:41:23.144391 | orchestrator | + echo 2025-07-06 20:41:23.144403 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH 
QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2025-07-06 20:41:23.150307 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.002140s;;;0.000000;10.000000 2025-07-06 20:41:23.150508 | orchestrator | + popd 2025-07-06 20:41:23.150535 | orchestrator | 2025-07-06 20:41:23.150554 | orchestrator | # Create backup of MariaDB database 2025-07-06 20:41:23.150573 | orchestrator | 2025-07-06 20:41:23.150592 | orchestrator | + echo 2025-07-06 20:41:23.150610 | orchestrator | + echo '# Create backup of MariaDB database' 2025-07-06 20:41:23.150629 | orchestrator | + echo 2025-07-06 20:41:23.150684 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2025-07-06 20:41:25.001410 | orchestrator | 2025-07-06 20:41:24 | INFO  | Task 76dc37b0-97bc-4908-8838-82787f6a85ec (mariadb_backup) was prepared for execution. 2025-07-06 20:41:25.001521 | orchestrator | 2025-07-06 20:41:24 | INFO  | It takes a moment until task 76dc37b0-97bc-4908-8838-82787f6a85ec (mariadb_backup) has been started and output is visible here. 
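The checks above all follow the same nagios-plugin pattern: query a service endpoint, parse a health field, and exit non-zero on anything but the healthy state, so that the script's `set -e` aborts the job early. A minimal sketch of that pattern for the Elasticsearch check, with the JSON parsing done inline (the real job uses `nagios-plugins/check_elasticsearch`; the `ES_HOST` variable and `es_health_status` helper here are hypothetical illustrations, not part of the testbed scripts):

```shell
#!/usr/bin/env bash
set -e

# Extract the "status" field from an Elasticsearch _cluster/health JSON
# payload without requiring jq. Assumes the compact JSON the API returns.
es_health_status() {
    printf '%s' "$1" | sed -n 's/.*"status":"\([a-z]*\)".*/\1/p'
}

check_elasticsearch() {
    # ES_HOST is a placeholder; the job above targets
    # api-int.testbed.osism.xyz. Fails (non-zero) unless status is green.
    local body
    body=$(curl -fsS "https://${ES_HOST}/_cluster/health")
    [ "$(es_health_status "$body")" = "green" ]
}

# Offline demonstration of the parser on a sample payload:
sample='{"cluster_name":"kolla_logging","status":"green","number_of_nodes":3}'
es_health_status "$sample"   # prints: green
```

Because every check exits non-zero on failure, a single degraded service (for example a yellow cluster status) stops the whole check script rather than being buried in later output.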
2025-07-06 20:41:52.154952 | orchestrator | 2025-07-06 20:41:52.155092 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-06 20:41:52.155117 | orchestrator | 2025-07-06 20:41:52.155135 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-06 20:41:52.155152 | orchestrator | Sunday 06 July 2025 20:41:28 +0000 (0:00:00.174) 0:00:00.174 *********** 2025-07-06 20:41:52.155168 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:41:52.155186 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:41:52.155203 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:41:52.155218 | orchestrator | 2025-07-06 20:41:52.155235 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-06 20:41:52.155251 | orchestrator | Sunday 06 July 2025 20:41:29 +0000 (0:00:00.309) 0:00:00.483 *********** 2025-07-06 20:41:52.155266 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-07-06 20:41:52.155283 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-07-06 20:41:52.155299 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-07-06 20:41:52.155315 | orchestrator | 2025-07-06 20:41:52.155332 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-07-06 20:41:52.155347 | orchestrator | 2025-07-06 20:41:52.155363 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-07-06 20:41:52.155378 | orchestrator | Sunday 06 July 2025 20:41:29 +0000 (0:00:00.572) 0:00:01.056 *********** 2025-07-06 20:41:52.155395 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-07-06 20:41:52.155413 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-07-06 20:41:52.155429 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-07-06 20:41:52.155516 | orchestrator | 
2025-07-06 20:41:52.155536 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-07-06 20:41:52.155556 | orchestrator | Sunday 06 July 2025 20:41:30 +0000 (0:00:00.387) 0:00:01.443 *********** 2025-07-06 20:41:52.155572 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-06 20:41:52.155590 | orchestrator | 2025-07-06 20:41:52.155606 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-07-06 20:41:52.155624 | orchestrator | Sunday 06 July 2025 20:41:30 +0000 (0:00:00.517) 0:00:01.961 *********** 2025-07-06 20:41:52.155643 | orchestrator | ok: [testbed-node-0] 2025-07-06 20:41:52.155660 | orchestrator | ok: [testbed-node-1] 2025-07-06 20:41:52.155677 | orchestrator | ok: [testbed-node-2] 2025-07-06 20:41:52.155694 | orchestrator | 2025-07-06 20:41:52.155712 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2025-07-06 20:41:52.155729 | orchestrator | Sunday 06 July 2025 20:41:33 +0000 (0:00:02.875) 0:00:04.837 *********** 2025-07-06 20:41:52.155746 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-07-06 20:41:52.155786 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-07-06 20:41:52.155805 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-07-06 20:41:52.155849 | orchestrator | mariadb_bootstrap_restart 2025-07-06 20:41:52.155866 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:41:52.155884 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:41:52.155901 | orchestrator | changed: [testbed-node-0] 2025-07-06 20:41:52.155919 | orchestrator | 2025-07-06 20:41:52.155937 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-07-06 20:41:52.155955 | orchestrator | 
skipping: no hosts matched 2025-07-06 20:41:52.156006 | orchestrator | 2025-07-06 20:41:52.156025 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-07-06 20:41:52.156042 | orchestrator | skipping: no hosts matched 2025-07-06 20:41:52.156059 | orchestrator | 2025-07-06 20:41:52.156074 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-07-06 20:41:52.156091 | orchestrator | skipping: no hosts matched 2025-07-06 20:41:52.156108 | orchestrator | 2025-07-06 20:41:52.156124 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-07-06 20:41:52.156141 | orchestrator | 2025-07-06 20:41:52.156155 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-07-06 20:41:52.156170 | orchestrator | Sunday 06 July 2025 20:41:51 +0000 (0:00:17.787) 0:00:22.624 *********** 2025-07-06 20:41:52.156185 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:41:52.156200 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:41:52.156215 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:41:52.156231 | orchestrator | 2025-07-06 20:41:52.156246 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-07-06 20:41:52.156262 | orchestrator | Sunday 06 July 2025 20:41:51 +0000 (0:00:00.287) 0:00:22.912 *********** 2025-07-06 20:41:52.156278 | orchestrator | skipping: [testbed-node-0] 2025-07-06 20:41:52.156294 | orchestrator | skipping: [testbed-node-1] 2025-07-06 20:41:52.156311 | orchestrator | skipping: [testbed-node-2] 2025-07-06 20:41:52.156328 | orchestrator | 2025-07-06 20:41:52.156344 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-06 20:41:52.156362 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-06 
20:41:52.156380 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-06 20:41:52.156398 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-06 20:41:52.156415 | orchestrator | 2025-07-06 20:41:52.156431 | orchestrator | 2025-07-06 20:41:52.156445 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-06 20:41:52.156455 | orchestrator | Sunday 06 July 2025 20:41:51 +0000 (0:00:00.223) 0:00:23.135 *********** 2025-07-06 20:41:52.156465 | orchestrator | =============================================================================== 2025-07-06 20:41:52.156475 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 17.79s 2025-07-06 20:41:52.156507 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 2.88s 2025-07-06 20:41:52.156518 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.57s 2025-07-06 20:41:52.156528 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.52s 2025-07-06 20:41:52.156537 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.39s 2025-07-06 20:41:52.156547 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s 2025-07-06 20:41:52.156557 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.29s 2025-07-06 20:41:52.156567 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.22s 2025-07-06 20:41:52.425211 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2025-07-06 20:41:52.431223 | orchestrator | + set -e 2025-07-06 20:41:52.431277 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-07-06 20:41:52.431292 | orchestrator | ++ export 
INTERACTIVE=false 2025-07-06 20:41:52.431304 | orchestrator | ++ INTERACTIVE=false 2025-07-06 20:41:52.431315 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-07-06 20:41:52.431326 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-07-06 20:41:52.431337 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-07-06 20:41:52.432029 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-07-06 20:41:52.436912 | orchestrator | 2025-07-06 20:41:52.436942 | orchestrator | # OpenStack endpoints 2025-07-06 20:41:52.436986 | orchestrator | 2025-07-06 20:41:52.436999 | orchestrator | ++ export MANAGER_VERSION=latest 2025-07-06 20:41:52.437011 | orchestrator | ++ MANAGER_VERSION=latest 2025-07-06 20:41:52.437024 | orchestrator | + export OS_CLOUD=admin 2025-07-06 20:41:52.437036 | orchestrator | + OS_CLOUD=admin 2025-07-06 20:41:52.437048 | orchestrator | + echo 2025-07-06 20:41:52.437060 | orchestrator | + echo '# OpenStack endpoints' 2025-07-06 20:41:52.437071 | orchestrator | + echo 2025-07-06 20:41:52.437083 | orchestrator | + openstack endpoint list 2025-07-06 20:41:55.984603 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-07-06 20:41:55.984742 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2025-07-06 20:41:55.984787 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-07-06 20:41:55.984801 | orchestrator | | 014be19486804d22874b040b182d3fa7 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2025-07-06 20:41:55.984813 | orchestrator | | 0a54492592f04a5b8d201298ca8fb025 | RegionOne | octavia | load-balancer 
| True | internal | https://api-int.testbed.osism.xyz:9876 | 2025-07-06 20:41:55.984871 | orchestrator | | 10917f73bbd34dbba626bc346fdf2936 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2025-07-06 20:41:55.984883 | orchestrator | | 19c83c7401634644aa92027cdfd47ce7 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2025-07-06 20:41:55.984894 | orchestrator | | 1bf7dc87402b4af9a330e6d2cdb535c4 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-07-06 20:41:55.984906 | orchestrator | | 2f383c9b74b143529988b80bfcd2f975 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2025-07-06 20:41:55.984917 | orchestrator | | 40686558ded2404a915dab010da59d5a | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2025-07-06 20:41:55.984928 | orchestrator | | 41bf11f7734d40aa9866d41afe50ee0d | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-07-06 20:41:55.984939 | orchestrator | | 5656dad4bb3a45a7a57ca6c983963246 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2025-07-06 20:41:55.984951 | orchestrator | | 56b2852b0fd94d649774eef13f1f349c | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2025-07-06 20:41:55.984962 | orchestrator | | 6a7cb52a7b994a7cb958e72c6306c0a8 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2025-07-06 20:41:55.984973 | orchestrator | | 6d9fba9a77aa49059b1b3f6069e43f6c | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2025-07-06 20:41:55.984984 | orchestrator | | 70a9fea5c46a48e182da423b7207756d | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 
2025-07-06 20:41:55.984995 | orchestrator | | 7bb802146c5c407192330e9fd2c1b71e | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2025-07-06 20:41:55.985007 | orchestrator | | 80da62bde7594e05bd92f8bb706e7a5a | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2025-07-06 20:41:55.985038 | orchestrator | | 8cf9b5356c7f4593b6e32e24dc2dd5b9 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2025-07-06 20:41:55.985050 | orchestrator | | a5b705302ac44a39bf9bdad08e323457 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2025-07-06 20:41:55.985061 | orchestrator | | b631573d346d409f943c3f96736ca1af | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-07-06 20:41:55.985072 | orchestrator | | bd6424f2247a4e6ab5a883c1866a74c8 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2025-07-06 20:41:55.985083 | orchestrator | | cf6f392f6e804ab8b0772dc6b07f8b98 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2025-07-06 20:41:55.985114 | orchestrator | | f09dd0943f964a3b93f7c2a63cfc2c26 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2025-07-06 20:41:55.985128 | orchestrator | | f7464c26d36640f4a07bc0de369c9509 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-07-06 20:41:55.985146 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-07-06 20:41:56.228305 | orchestrator | 2025-07-06 20:41:56.228407 | orchestrator | # Cinder 2025-07-06 20:41:56.228423 | orchestrator | 2025-07-06 20:41:56.228436 | orchestrator | + echo 
2025-07-06 20:41:56.228448 | orchestrator | + echo '# Cinder' 2025-07-06 20:41:56.228460 | orchestrator | + echo 2025-07-06 20:41:56.228472 | orchestrator | + openstack volume service list 2025-07-06 20:41:58.872161 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-07-06 20:41:58.872394 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2025-07-06 20:41:58.872438 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-07-06 20:41:58.872454 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2025-07-06T20:41:50.000000 | 2025-07-06 20:41:58.872473 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2025-07-06T20:41:50.000000 | 2025-07-06 20:41:58.872492 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2025-07-06T20:41:50.000000 | 2025-07-06 20:41:58.872519 | orchestrator | | cinder-volume | testbed-node-5@rbd-volumes | nova | enabled | up | 2025-07-06T20:41:49.000000 | 2025-07-06 20:41:58.872543 | orchestrator | | cinder-volume | testbed-node-3@rbd-volumes | nova | enabled | up | 2025-07-06T20:41:50.000000 | 2025-07-06 20:41:58.872563 | orchestrator | | cinder-volume | testbed-node-4@rbd-volumes | nova | enabled | up | 2025-07-06T20:41:51.000000 | 2025-07-06 20:41:58.872582 | orchestrator | | cinder-backup | testbed-node-4 | nova | enabled | up | 2025-07-06T20:41:55.000000 | 2025-07-06 20:41:58.872601 | orchestrator | | cinder-backup | testbed-node-5 | nova | enabled | up | 2025-07-06T20:41:55.000000 | 2025-07-06 20:41:58.872619 | orchestrator | | cinder-backup | testbed-node-3 | nova | enabled | up | 2025-07-06T20:41:55.000000 | 2025-07-06 20:41:58.872637 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-07-06 20:41:59.117339 | 
orchestrator | 2025-07-06 20:41:59.117439 | orchestrator | # Neutron 2025-07-06 20:41:59.117455 | orchestrator | 2025-07-06 20:41:59.117467 | orchestrator | + echo 2025-07-06 20:41:59.117479 | orchestrator | + echo '# Neutron' 2025-07-06 20:41:59.117491 | orchestrator | + echo 2025-07-06 20:41:59.117502 | orchestrator | + openstack network agent list 2025-07-06 20:42:02.567446 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-07-06 20:42:02.567580 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2025-07-06 20:42:02.567596 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-07-06 20:42:02.567608 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2025-07-06 20:42:02.567619 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2025-07-06 20:42:02.567631 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2025-07-06 20:42:02.567642 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2025-07-06 20:42:02.567653 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2025-07-06 20:42:02.567664 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2025-07-06 20:42:02.567675 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2025-07-06 20:42:02.567686 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | 
UP | neutron-ovn-metadata-agent | 2025-07-06 20:42:02.567697 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2025-07-06 20:42:02.567708 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-07-06 20:42:02.840927 | orchestrator | + openstack network service provider list 2025-07-06 20:42:05.457118 | orchestrator | +---------------+------+---------+ 2025-07-06 20:42:05.457257 | orchestrator | | Service Type | Name | Default | 2025-07-06 20:42:05.457274 | orchestrator | +---------------+------+---------+ 2025-07-06 20:42:05.457286 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2025-07-06 20:42:05.457298 | orchestrator | +---------------+------+---------+ 2025-07-06 20:42:05.714749 | orchestrator | 2025-07-06 20:42:05.714947 | orchestrator | # Nova 2025-07-06 20:42:05.714977 | orchestrator | 2025-07-06 20:42:05.714999 | orchestrator | + echo 2025-07-06 20:42:05.715018 | orchestrator | + echo '# Nova' 2025-07-06 20:42:05.715034 | orchestrator | + echo 2025-07-06 20:42:05.715046 | orchestrator | + openstack compute service list 2025-07-06 20:42:08.868501 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-07-06 20:42:08.868639 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2025-07-06 20:42:08.868654 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-07-06 20:42:08.868663 | orchestrator | | 330eff90-d568-4642-9861-233edc181fb4 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2025-07-06T20:42:06.000000 | 2025-07-06 20:42:08.868670 | orchestrator | | cd75af25-d17f-4416-aa6a-27b5383093d4 | nova-scheduler | testbed-node-1 
| internal | enabled | up | 2025-07-06T20:42:01.000000 | 2025-07-06 20:42:08.868677 | orchestrator | | 72ab1375-5ce2-41ee-95f5-53debafff177 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2025-07-06T20:42:01.000000 | 2025-07-06 20:42:08.868683 | orchestrator | | 8dd57fa2-128c-4160-8857-c8436bedb317 | nova-conductor | testbed-node-0 | internal | enabled | up | 2025-07-06T20:42:01.000000 | 2025-07-06 20:42:08.868690 | orchestrator | | bdceeee7-072a-4573-8581-340c00bc0975 | nova-conductor | testbed-node-1 | internal | enabled | up | 2025-07-06T20:42:03.000000 | 2025-07-06 20:42:08.868717 | orchestrator | | 0391ff47-d400-4dcb-8c6b-f9c2fcaf2ec5 | nova-conductor | testbed-node-2 | internal | enabled | up | 2025-07-06T20:42:03.000000 | 2025-07-06 20:42:08.868725 | orchestrator | | c5d06420-bf32-4507-b9fb-de7784af6fbb | nova-compute | testbed-node-4 | nova | enabled | up | 2025-07-06T20:42:08.000000 | 2025-07-06 20:42:08.868732 | orchestrator | | fb1bd335-ca67-4290-9a5a-9ec6e8aea2ad | nova-compute | testbed-node-3 | nova | enabled | up | 2025-07-06T20:41:59.000000 | 2025-07-06 20:42:08.868738 | orchestrator | | 65d69b3d-7929-401a-9355-191c156ace6a | nova-compute | testbed-node-5 | nova | enabled | up | 2025-07-06T20:41:59.000000 | 2025-07-06 20:42:08.868745 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-07-06 20:42:09.119552 | orchestrator | + openstack hypervisor list 2025-07-06 20:42:13.446904 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-07-06 20:42:13.447016 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2025-07-06 20:42:13.447032 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-07-06 20:42:13.447045 | orchestrator | | f5ce833c-1c4c-4d5d-a7ec-6543764675db | 
testbed-node-4 | QEMU | 192.168.16.14 | up | 2025-07-06 20:42:13.447056 | orchestrator | | 2fab9aab-f4a9-4ff3-a338-646737f062f7 | testbed-node-3 | QEMU | 192.168.16.13 | up | 2025-07-06 20:42:13.447067 | orchestrator | | 1a05f192-6c1a-4327-b211-78abd3913797 | testbed-node-5 | QEMU | 192.168.16.15 | up | 2025-07-06 20:42:13.447079 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-07-06 20:42:13.676799 | orchestrator | 2025-07-06 20:42:13.676966 | orchestrator | # Run OpenStack test play 2025-07-06 20:42:13.676985 | orchestrator | 2025-07-06 20:42:13.676997 | orchestrator | + echo 2025-07-06 20:42:13.677009 | orchestrator | + echo '# Run OpenStack test play' 2025-07-06 20:42:13.677022 | orchestrator | + echo 2025-07-06 20:42:13.677033 | orchestrator | + osism apply --environment openstack test 2025-07-06 20:42:15.582395 | orchestrator | 2025-07-06 20:42:15 | INFO  | Trying to run play test in environment openstack 2025-07-06 20:42:25.695774 | orchestrator | 2025-07-06 20:42:25 | INFO  | Task a59c1b57-d5dc-42f1-a3a1-5cf66e6324de (test) was prepared for execution. 2025-07-06 20:42:25.695962 | orchestrator | 2025-07-06 20:42:25 | INFO  | It takes a moment until task a59c1b57-d5dc-42f1-a3a1-5cf66e6324de (test) has been started and output is visible here. 
2025-07-06 20:48:13.228886 | orchestrator |
2025-07-06 20:48:13.229173 | orchestrator | PLAY [Create test project] *****************************************************
2025-07-06 20:48:13.229205 | orchestrator |
2025-07-06 20:48:13.229223 | orchestrator | TASK [Create test domain] ******************************************************
2025-07-06 20:48:13.229239 | orchestrator | Sunday 06 July 2025 20:42:29 +0000 (0:00:00.075) 0:00:00.075 ***********
2025-07-06 20:48:13.229254 | orchestrator | changed: [localhost]
2025-07-06 20:48:13.229271 | orchestrator |
2025-07-06 20:48:13.229286 | orchestrator | TASK [Create test-admin user] **************************************************
2025-07-06 20:48:13.229301 | orchestrator | Sunday 06 July 2025 20:42:33 +0000 (0:00:03.498) 0:00:03.574 ***********
2025-07-06 20:48:13.229317 | orchestrator | changed: [localhost]
2025-07-06 20:48:13.229332 | orchestrator |
2025-07-06 20:48:13.229347 | orchestrator | TASK [Add manager role to user test-admin] *************************************
2025-07-06 20:48:13.229364 | orchestrator | Sunday 06 July 2025 20:42:37 +0000 (0:00:04.105) 0:00:07.680 ***********
2025-07-06 20:48:13.229380 | orchestrator | changed: [localhost]
2025-07-06 20:48:13.229394 | orchestrator |
2025-07-06 20:48:13.229410 | orchestrator | TASK [Create test project] *****************************************************
2025-07-06 20:48:13.229426 | orchestrator | Sunday 06 July 2025 20:42:43 +0000 (0:00:06.287) 0:00:13.967 ***********
2025-07-06 20:48:13.229494 | orchestrator | changed: [localhost]
2025-07-06 20:48:13.229513 | orchestrator |
2025-07-06 20:48:13.229529 | orchestrator | TASK [Create test user] ********************************************************
2025-07-06 20:48:13.229544 | orchestrator | Sunday 06 July 2025 20:42:47 +0000 (0:00:04.031) 0:00:17.999 ***********
2025-07-06 20:48:13.229559 | orchestrator | changed: [localhost]
2025-07-06 20:48:13.229573 | orchestrator |
2025-07-06 20:48:13.229587 | orchestrator | TASK [Add member roles to user test] *******************************************
2025-07-06 20:48:13.229602 | orchestrator | Sunday 06 July 2025 20:42:51 +0000 (0:00:04.111) 0:00:22.111 ***********
2025-07-06 20:48:13.229617 | orchestrator | changed: [localhost] => (item=load-balancer_member)
2025-07-06 20:48:13.229635 | orchestrator | changed: [localhost] => (item=member)
2025-07-06 20:48:13.229653 | orchestrator | changed: [localhost] => (item=creator)
2025-07-06 20:48:13.229670 | orchestrator |
2025-07-06 20:48:13.229684 | orchestrator | TASK [Create test server group] ************************************************
2025-07-06 20:48:13.229699 | orchestrator | Sunday 06 July 2025 20:43:03 +0000 (0:00:11.724) 0:00:33.836 ***********
2025-07-06 20:48:13.229714 | orchestrator | changed: [localhost]
2025-07-06 20:48:13.229728 | orchestrator |
2025-07-06 20:48:13.229742 | orchestrator | TASK [Create ssh security group] ***********************************************
2025-07-06 20:48:13.229755 | orchestrator | Sunday 06 July 2025 20:43:08 +0000 (0:00:04.779) 0:00:38.616 ***********
2025-07-06 20:48:13.229769 | orchestrator | changed: [localhost]
2025-07-06 20:48:13.229782 | orchestrator |
2025-07-06 20:48:13.229794 | orchestrator | TASK [Add rule to ssh security group] ******************************************
2025-07-06 20:48:13.229807 | orchestrator | Sunday 06 July 2025 20:43:13 +0000 (0:00:05.168) 0:00:43.784 ***********
2025-07-06 20:48:13.229841 | orchestrator | changed: [localhost]
2025-07-06 20:48:13.229856 | orchestrator |
2025-07-06 20:48:13.229870 | orchestrator | TASK [Create icmp security group] **********************************************
2025-07-06 20:48:13.229883 | orchestrator | Sunday 06 July 2025 20:43:17 +0000 (0:00:04.086) 0:00:47.871 ***********
2025-07-06 20:48:13.229896 | orchestrator | changed: [localhost]
2025-07-06 20:48:13.229909 | orchestrator |
2025-07-06 20:48:13.229923 | orchestrator | TASK [Add rule to icmp security group] *****************************************
2025-07-06 20:48:13.229937 | orchestrator | Sunday 06 July 2025 20:43:21 +0000 (0:00:04.306) 0:00:52.177 ***********
2025-07-06 20:48:13.229952 | orchestrator | changed: [localhost]
2025-07-06 20:48:13.229965 | orchestrator |
2025-07-06 20:48:13.229980 | orchestrator | TASK [Create test keypair] *****************************************************
2025-07-06 20:48:13.229995 | orchestrator | Sunday 06 July 2025 20:43:25 +0000 (0:00:03.871) 0:00:56.049 ***********
2025-07-06 20:48:13.230010 | orchestrator | changed: [localhost]
2025-07-06 20:48:13.230093 | orchestrator |
2025-07-06 20:48:13.230145 | orchestrator | TASK [Create test network topology] ********************************************
2025-07-06 20:48:13.230160 | orchestrator | Sunday 06 July 2025 20:43:29 +0000 (0:00:04.312) 0:01:00.361 ***********
2025-07-06 20:48:13.230174 | orchestrator | changed: [localhost]
2025-07-06 20:48:13.230187 | orchestrator |
2025-07-06 20:48:13.230199 | orchestrator | TASK [Create test instances] ***************************************************
2025-07-06 20:48:13.230212 | orchestrator | Sunday 06 July 2025 20:43:46 +0000 (0:00:16.474) 0:01:16.836 ***********
2025-07-06 20:48:13.230225 | orchestrator | changed: [localhost] => (item=test)
2025-07-06 20:48:13.230239 | orchestrator | changed: [localhost] => (item=test-1)
2025-07-06 20:48:13.230254 | orchestrator | changed: [localhost] => (item=test-2)
2025-07-06 20:48:13.230266 | orchestrator |
2025-07-06 20:48:13.230280 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2025-07-06 20:48:13.230295 | orchestrator | changed: [localhost] => (item=test-3)
2025-07-06 20:48:13.230309 | orchestrator |
2025-07-06 20:48:13.230323 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2025-07-06 20:48:13.230337 | orchestrator | changed: [localhost] => (item=test-4)
2025-07-06 20:48:13.230349 | orchestrator |
2025-07-06 20:48:13.230361 | orchestrator | TASK [Add metadata to instances] ***********************************************
2025-07-06 20:48:13.230392 | orchestrator | Sunday 06 July 2025 20:46:51 +0000 (0:03:05.248) 0:04:22.084 ***********
2025-07-06 20:48:13.230406 | orchestrator | changed: [localhost] => (item=test)
2025-07-06 20:48:13.230490 | orchestrator | changed: [localhost] => (item=test-1)
2025-07-06 20:48:13.230506 | orchestrator | changed: [localhost] => (item=test-2)
2025-07-06 20:48:13.230521 | orchestrator | changed: [localhost] => (item=test-3)
2025-07-06 20:48:13.230536 | orchestrator | changed: [localhost] => (item=test-4)
2025-07-06 20:48:13.230550 | orchestrator |
2025-07-06 20:48:13.230562 | orchestrator | TASK [Add tag to instances] ****************************************************
2025-07-06 20:48:13.230575 | orchestrator | Sunday 06 July 2025 20:47:14 +0000 (0:00:22.919) 0:04:45.004 ***********
2025-07-06 20:48:13.230588 | orchestrator | changed: [localhost] => (item=test)
2025-07-06 20:48:13.230601 | orchestrator | changed: [localhost] => (item=test-1)
2025-07-06 20:48:13.230613 | orchestrator | changed: [localhost] => (item=test-2)
2025-07-06 20:48:13.230626 | orchestrator | changed: [localhost] => (item=test-3)
2025-07-06 20:48:13.230688 | orchestrator | changed: [localhost] => (item=test-4)
2025-07-06 20:48:13.230703 | orchestrator |
2025-07-06 20:48:13.230715 | orchestrator | TASK [Create test volume] ******************************************************
2025-07-06 20:48:13.230726 | orchestrator | Sunday 06 July 2025 20:47:46 +0000 (0:00:32.268) 0:05:17.273 ***********
2025-07-06 20:48:13.230737 | orchestrator | changed: [localhost]
2025-07-06 20:48:13.230749 | orchestrator |
2025-07-06 20:48:13.230761 | orchestrator | TASK [Attach test volume] ******************************************************
2025-07-06 20:48:13.230772 | orchestrator | Sunday 06 July 2025 20:47:54 +0000 (0:00:07.587) 0:05:24.860 ***********
2025-07-06 20:48:13.230784 | orchestrator | changed: [localhost]
2025-07-06 20:48:13.230796 | orchestrator |
2025-07-06 20:48:13.230808 | orchestrator | TASK [Create floating ip address] **********************************************
2025-07-06 20:48:13.230820 | orchestrator | Sunday 06 July 2025 20:48:07 +0000 (0:00:13.435) 0:05:38.296 ***********
2025-07-06 20:48:13.230833 | orchestrator | ok: [localhost]
2025-07-06 20:48:13.230845 | orchestrator |
2025-07-06 20:48:13.230858 | orchestrator | TASK [Print floating ip address] ***********************************************
2025-07-06 20:48:13.230870 | orchestrator | Sunday 06 July 2025 20:48:12 +0000 (0:00:05.079) 0:05:43.375 ***********
2025-07-06 20:48:13.230882 | orchestrator | ok: [localhost] => {
2025-07-06 20:48:13.230894 | orchestrator |     "msg": "192.168.112.177"
2025-07-06 20:48:13.230907 | orchestrator | }
2025-07-06 20:48:13.230920 | orchestrator |
2025-07-06 20:48:13.230932 | orchestrator | PLAY RECAP *********************************************************************
2025-07-06 20:48:13.230950 | orchestrator | localhost : ok=20  changed=18  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-06 20:48:13.230964 | orchestrator |
2025-07-06 20:48:13.230977 | orchestrator |
2025-07-06 20:48:13.231004 | orchestrator | TASKS RECAP ********************************************************************
2025-07-06 20:48:13.231019 | orchestrator | Sunday 06 July 2025 20:48:12 +0000 (0:00:00.038) 0:05:43.414 ***********
2025-07-06 20:48:13.231041 | orchestrator | ===============================================================================
2025-07-06 20:48:13.231055 | orchestrator | Create test instances ------------------------------------------------- 185.25s
2025-07-06 20:48:13.231067 | orchestrator | Add tag to instances --------------------------------------------------- 32.27s
2025-07-06 20:48:13.231079 | orchestrator | Add metadata to instances ---------------------------------------------- 22.92s
2025-07-06 20:48:13.231091 | orchestrator | Create test network topology ------------------------------------------- 16.47s
2025-07-06 20:48:13.231102 | orchestrator | Attach test volume ----------------------------------------------------- 13.44s
2025-07-06 20:48:13.231173 | orchestrator | Add member roles to user test ------------------------------------------ 11.73s
2025-07-06 20:48:13.231185 | orchestrator | Create test volume ------------------------------------------------------ 7.59s
2025-07-06 20:48:13.231196 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.29s
2025-07-06 20:48:13.231207 | orchestrator | Create ssh security group ----------------------------------------------- 5.17s
2025-07-06 20:48:13.231233 | orchestrator | Create floating ip address ---------------------------------------------- 5.08s
2025-07-06 20:48:13.231246 | orchestrator | Create test server group ------------------------------------------------ 4.78s
2025-07-06 20:48:13.231258 | orchestrator | Create test keypair ----------------------------------------------------- 4.31s
2025-07-06 20:48:13.231270 | orchestrator | Create icmp security group ---------------------------------------------- 4.31s
2025-07-06 20:48:13.231281 | orchestrator | Create test user -------------------------------------------------------- 4.11s
2025-07-06 20:48:13.231293 | orchestrator | Create test-admin user -------------------------------------------------- 4.11s
2025-07-06 20:48:13.231304 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.09s
2025-07-06 20:48:13.231316 | orchestrator | Create test project ----------------------------------------------------- 4.03s
2025-07-06 20:48:13.231329 | orchestrator | Add rule to icmp security group ----------------------------------------- 3.87s
2025-07-06 20:48:13.231341 | orchestrator | Create test domain ------------------------------------------------------ 3.50s
2025-07-06 20:48:13.231352 | orchestrator | Print floating ip address ----------------------------------------------- 0.04s
2025-07-06 20:48:13.487388 | orchestrator | + server_list
2025-07-06 20:48:13.487490 | orchestrator | + openstack --os-cloud test server list
2025-07-06 20:48:17.124792 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
2025-07-06 20:48:17.124930 | orchestrator | | ID | Name | Status | Networks | Image | Flavor |
2025-07-06 20:48:17.124950 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
2025-07-06 20:48:17.124963 | orchestrator | | 8c6b5176-fd6d-4294-a3dc-3a3a749e7512 | test-4 | ACTIVE | auto_allocated_network=10.42.0.21, 192.168.112.103 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-07-06 20:48:17.124978 | orchestrator | | 3ef21d5c-66ad-4d83-88af-086eb34f1fd9 | test-3 | ACTIVE | auto_allocated_network=10.42.0.51, 192.168.112.125 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-07-06 20:48:17.124997 | orchestrator | | c87662e4-2344-4841-8f6c-78a95db51822 | test-2 | ACTIVE | auto_allocated_network=10.42.0.56, 192.168.112.187 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-07-06 20:48:17.125018 | orchestrator | | 72f77100-c5e7-43f6-994d-eb88ae94aab0 | test-1 | ACTIVE | auto_allocated_network=10.42.0.37, 192.168.112.133 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-07-06 20:48:17.125037 | orchestrator | | 811f2d63-b8d7-484b-b1ff-e4b198e2d293 | test | ACTIVE | auto_allocated_network=10.42.0.46, 192.168.112.177 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-07-06 20:48:17.125049 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
2025-07-06 20:48:17.386433 | orchestrator | + openstack --os-cloud test server show test
2025-07-06 20:48:20.609290 | orchestrator |
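The `openstack --os-cloud test server list` call above prints the human-readable ASCII table. When scripting a check like "all test instances are ACTIVE", a machine-readable format (`-f json`) is more robust, but the table itself can also be parsed. A sketch using a trimmed sample of the table above (column names as in the real output; IDs shortened here for illustration):

```python
def parse_table(text: str) -> list[dict]:
    """Parse an openstack-client ASCII table into a list of row dicts."""
    rows = [line for line in text.strip().splitlines() if line.startswith("|")]
    header, *data = [[cell.strip() for cell in row.strip("|").split("|")]
                     for row in rows]
    return [dict(zip(header, values)) for values in data]

SAMPLE = """
+----------+--------+--------+
| ID       | Name   | Status |
+----------+--------+--------+
| 8c6b5176 | test-4 | ACTIVE |
| 811f2d63 | test   | ACTIVE |
+----------+--------+--------+
"""
servers = parse_table(SAMPLE)
assert all(s["Status"] == "ACTIVE" for s in servers)
```

Note this only works for single-line cells; the `server show` tables later in the log wrap long values, which is one more reason to prefer `-f json` in scripts.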
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------+
2025-07-06 20:48:20.609395 | orchestrator | | Field | Value |
2025-07-06 20:48:20.609411 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------+
2025-07-06 20:48:20.609449 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-07-06 20:48:20.609462 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-07-06 20:48:20.609473 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-07-06 20:48:20.609485 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test |
2025-07-06 20:48:20.609496 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-07-06 20:48:20.609507 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-07-06 20:48:20.609518 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-07-06 20:48:20.609529 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-07-06 20:48:20.609558 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-07-06 20:48:20.609570 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-07-06 20:48:20.609582 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-07-06 20:48:20.609604 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-07-06 20:48:20.609616 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-07-06 20:48:20.609627 | orchestrator | | OS-EXT-STS:task_state | None |
2025-07-06 20:48:20.609639 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-07-06 20:48:20.609650 | orchestrator | | OS-SRV-USG:launched_at | 2025-07-06T20:44:17.000000 |
2025-07-06 20:48:20.609661 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-07-06 20:48:20.609673 | orchestrator | | accessIPv4 | |
2025-07-06 20:48:20.609684 | orchestrator | | accessIPv6 | |
2025-07-06 20:48:20.609695 | orchestrator | | addresses | auto_allocated_network=10.42.0.46, 192.168.112.177 |
2025-07-06 20:48:20.609713 | orchestrator | | config_drive | |
2025-07-06 20:48:20.609724 | orchestrator | | created | 2025-07-06T20:43:55Z |
2025-07-06 20:48:20.609744 | orchestrator | | description | None |
2025-07-06 20:48:20.609758 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-07-06 20:48:20.609771 | orchestrator | | hostId | 143bec46de2be37c8d09b4c43791b361b526ba646ff738e4640e6640 |
2025-07-06 20:48:20.609784 | orchestrator | | host_status | None |
2025-07-06 20:48:20.609797 | orchestrator | | id | 811f2d63-b8d7-484b-b1ff-e4b198e2d293 |
2025-07-06 20:48:20.609809 | orchestrator | | image | Cirros 0.6.2 (6f08b0de-9f0d-445a-945f-03ff7e037f26) |
2025-07-06 20:48:20.609822 | orchestrator | | key_name | test |
2025-07-06 20:48:20.609835 | orchestrator | | locked | False |
2025-07-06 20:48:20.609848 | orchestrator | | locked_reason | None |
2025-07-06 20:48:20.609867 | orchestrator | | name | test |
2025-07-06 20:48:20.609886 | orchestrator | | pinned_availability_zone | None |
2025-07-06 20:48:20.609907 | orchestrator | | progress | 0 |
2025-07-06 20:48:20.609921 | orchestrator | | project_id | b4b05305417647db9adbf38f3a7c87a5 |
2025-07-06 20:48:20.609938 | orchestrator | | properties | hostname='test' |
2025-07-06 20:48:20.609951 | orchestrator | | security_groups | name='ssh' |
2025-07-06 20:48:20.609964 | orchestrator | | | name='icmp' |
2025-07-06 20:48:20.609977 | orchestrator | | server_groups | None |
2025-07-06 20:48:20.609990 | orchestrator | | status | ACTIVE |
2025-07-06 20:48:20.610002 | orchestrator | | tags | test |
2025-07-06 20:48:20.610089 | orchestrator | | trusted_image_certificates | None |
2025-07-06 20:48:20.610104 | orchestrator | | updated | 2025-07-06T20:46:56Z |
2025-07-06 20:48:20.610152 | orchestrator | | user_id | 6ef46a54533c459a9f87cfaeb2750877 |
2025-07-06 20:48:20.610165 | orchestrator | | volumes_attached | delete_on_termination='False', id='aa0c0948-98cf-4e29-b295-59da78bd54d6' |
2025-07-06 20:48:20.615322 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------+
2025-07-06 20:48:20.871524 | orchestrator | + openstack --os-cloud test server show test-1
2025-07-06 20:48:24.006859 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------+
2025-07-06 20:48:24.006996 | orchestrator | | Field | Value |
2025-07-06 20:48:24.007016 | orchestrator |
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------+
2025-07-06 20:48:24.007037 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-07-06 20:48:24.007057 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-07-06 20:48:24.007077 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-07-06 20:48:24.007093 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 |
2025-07-06 20:48:24.007105 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-07-06 20:48:24.007199 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-07-06 20:48:24.007219 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-07-06 20:48:24.007239 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-07-06 20:48:24.007282 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-07-06 20:48:24.007305 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-07-06 20:48:24.007325 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-07-06 20:48:24.007345 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-07-06 20:48:24.007363 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-07-06 20:48:24.007376 | orchestrator | | OS-EXT-STS:task_state | None |
2025-07-06 20:48:24.007389 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-07-06 20:48:24.007409 | orchestrator | | OS-SRV-USG:launched_at | 2025-07-06T20:44:57.000000 |
2025-07-06 20:48:24.007440 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-07-06 20:48:24.007456 | orchestrator | | accessIPv4 | |
2025-07-06 20:48:24.007468 | orchestrator | | accessIPv6 | |
2025-07-06 20:48:24.007484 | orchestrator | | addresses | auto_allocated_network=10.42.0.37, 192.168.112.133 |
2025-07-06 20:48:24.007521 | orchestrator | | config_drive | |
2025-07-06 20:48:24.007541 | orchestrator | | created | 2025-07-06T20:44:38Z |
2025-07-06 20:48:24.007554 | orchestrator | | description | None |
2025-07-06 20:48:24.007566 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-07-06 20:48:24.007585 | orchestrator | | hostId | 24417ff239b52e6f6dbbc81a3470043aa712c4610a4cd9d434c3f9c6 |
2025-07-06 20:48:24.007606 | orchestrator | | host_status | None |
2025-07-06 20:48:24.007625 | orchestrator | | id | 72f77100-c5e7-43f6-994d-eb88ae94aab0 |
2025-07-06 20:48:24.007648 | orchestrator | | image | Cirros 0.6.2 (6f08b0de-9f0d-445a-945f-03ff7e037f26) |
2025-07-06 20:48:24.007662 | orchestrator | | key_name | test |
2025-07-06 20:48:24.007682 | orchestrator | | locked | False |
2025-07-06 20:48:24.007702 | orchestrator | | locked_reason | None |
2025-07-06 20:48:24.007721 | orchestrator | | name | test-1 |
2025-07-06 20:48:24.007747 | orchestrator | | pinned_availability_zone | None |
2025-07-06 20:48:24.007759 | orchestrator | | progress | 0 |
2025-07-06 20:48:24.007772 | orchestrator | | project_id | b4b05305417647db9adbf38f3a7c87a5 |
2025-07-06 20:48:24.007791 | orchestrator | | properties | hostname='test-1' |
2025-07-06 20:48:24.007811 | orchestrator | | security_groups | name='ssh' |
2025-07-06 20:48:24.007838 | orchestrator | | | name='icmp' |
2025-07-06 20:48:24.007850 | orchestrator | | server_groups | None |
2025-07-06 20:48:24.007862 | orchestrator | | status | ACTIVE |
2025-07-06 20:48:24.007873 | orchestrator | | tags | test |
2025-07-06 20:48:24.007884 | orchestrator | | trusted_image_certificates | None |
2025-07-06 20:48:24.007903 | orchestrator | | updated | 2025-07-06T20:47:00Z |
2025-07-06 20:48:24.007921 | orchestrator | | user_id | 6ef46a54533c459a9f87cfaeb2750877 |
2025-07-06 20:48:24.007948 | orchestrator | | volumes_attached | |
2025-07-06 20:48:24.010959 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------+
2025-07-06 20:48:24.259315 | orchestrator | + openstack --os-cloud test server show test-2
2025-07-06 20:48:27.305277 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------+
2025-07-06 20:48:27.305382 | orchestrator | | Field | Value |
2025-07-06 20:48:27.305420 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------+
2025-07-06 20:48:27.305433 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-07-06 20:48:27.305444 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-07-06 20:48:27.305455 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-07-06 20:48:27.305466 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 |
2025-07-06 20:48:27.305477 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-07-06 20:48:27.305489 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-07-06 20:48:27.305516 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-07-06 20:48:27.305529 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-07-06 20:48:27.305586 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-07-06 20:48:27.305599 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-07-06 20:48:27.305619 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-07-06 20:48:27.305630 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-07-06 20:48:27.305641 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-07-06 20:48:27.305653 | orchestrator | | OS-EXT-STS:task_state | None |
2025-07-06 20:48:27.305664 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-07-06 20:48:27.305675 | orchestrator | | OS-SRV-USG:launched_at | 2025-07-06T20:45:37.000000 |
2025-07-06 20:48:27.305686 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-07-06 20:48:27.305698 | orchestrator | | accessIPv4 | |
2025-07-06 20:48:27.305715 | orchestrator | | accessIPv6 | |
2025-07-06 20:48:27.305727 | orchestrator | | addresses | auto_allocated_network=10.42.0.56, 192.168.112.187 |
2025-07-06 20:48:27.305744 | orchestrator | | config_drive | |
2025-07-06 20:48:27.305770 | orchestrator | | created | 2025-07-06T20:45:15Z |
2025-07-06 20:48:27.305782 | orchestrator | | description | None |
2025-07-06 20:48:27.305793 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-07-06 20:48:27.305804 | orchestrator | | hostId | 62c92f6f5eab69a9c64395ccdba0e33a85c7be1d4b2d48889167bde2 |
2025-07-06 20:48:27.305816 | orchestrator | | host_status | None |
2025-07-06 20:48:27.305827 | orchestrator | | id | c87662e4-2344-4841-8f6c-78a95db51822 |
2025-07-06 20:48:27.305838 | orchestrator | | image | Cirros 0.6.2 (6f08b0de-9f0d-445a-945f-03ff7e037f26) |
2025-07-06 20:48:27.305850 | orchestrator | | key_name | test |
2025-07-06 20:48:27.305861 | orchestrator | | locked | False |
2025-07-06 20:48:27.305877 | orchestrator | | locked_reason | None |
2025-07-06 20:48:27.305895 | orchestrator | | name | test-2 |
2025-07-06 20:48:27.305913 | orchestrator | | pinned_availability_zone | None |
2025-07-06 20:48:27.305924 | orchestrator | | progress | 0 |
2025-07-06 20:48:27.305936 | orchestrator | | project_id | b4b05305417647db9adbf38f3a7c87a5 |
2025-07-06 20:48:27.305947 | orchestrator | | properties | hostname='test-2' |
2025-07-06 20:48:27.305958 | orchestrator | | security_groups | name='ssh' |
2025-07-06 20:48:27.305970 | orchestrator | | | name='icmp' |
2025-07-06 20:48:27.305981 | orchestrator | | server_groups | None |
2025-07-06 20:48:27.305992 | orchestrator | | status | ACTIVE |
2025-07-06 20:48:27.306003 | orchestrator | | tags | test |
2025-07-06 20:48:27.306078 | orchestrator | | trusted_image_certificates | None |
2025-07-06 20:48:27.306099 | orchestrator | | updated | 2025-07-06T20:47:05Z |
2025-07-06 20:48:27.306146 | orchestrator | | user_id | 6ef46a54533c459a9f87cfaeb2750877 |
2025-07-06 20:48:27.306165 | orchestrator | | volumes_attached | |
2025-07-06 20:48:27.316443 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------+
2025-07-06 20:48:27.570102 | orchestrator | + openstack --os-cloud test server show test-3
2025-07-06 20:48:30.578552 | orchestrator |
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------+
2025-07-06 20:48:30.578665 | orchestrator | | Field | Value |
2025-07-06 20:48:30.578682 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------+
2025-07-06 20:48:30.578695 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-07-06 20:48:30.578707 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-07-06 20:48:30.578719 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-07-06 20:48:30.578730 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 |
2025-07-06 20:48:30.578769 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-07-06 20:48:30.578781 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-07-06 20:48:30.578792 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-07-06 20:48:30.578804 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-07-06 20:48:30.578833 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-07-06 20:48:30.578846 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-07-06 20:48:30.578857 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-07-06 20:48:30.578869 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-07-06 20:48:30.578899 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-07-06 20:48:30.578912 | orchestrator | | OS-EXT-STS:task_state | None |
2025-07-06 20:48:30.578931 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-07-06 20:48:30.578942 | orchestrator | | OS-SRV-USG:launched_at | 2025-07-06T20:46:08.000000 |
2025-07-06 20:48:30.578958 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-07-06 20:48:30.578970 | orchestrator | | accessIPv4 | |
2025-07-06 20:48:30.578981 | orchestrator | | accessIPv6 | |
2025-07-06 20:48:30.578993 | orchestrator | | addresses | auto_allocated_network=10.42.0.51, 192.168.112.125 |
2025-07-06 20:48:30.579011 | orchestrator | | config_drive | |
2025-07-06 20:48:30.579023 | orchestrator | | created | 2025-07-06T20:45:52Z |
2025-07-06 20:48:30.579034 | orchestrator | | description | None |
2025-07-06 20:48:30.579045 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-07-06 20:48:30.579056 | orchestrator | | hostId | 143bec46de2be37c8d09b4c43791b361b526ba646ff738e4640e6640 |
2025-07-06 20:48:30.579074 | orchestrator | | host_status | None |
2025-07-06 20:48:30.579085 | orchestrator | | id | 3ef21d5c-66ad-4d83-88af-086eb34f1fd9 |
2025-07-06 20:48:30.579101 | orchestrator | | image | Cirros 0.6.2 (6f08b0de-9f0d-445a-945f-03ff7e037f26) |
2025-07-06 20:48:30.579113 | orchestrator | | key_name | test |
2025-07-06 20:48:30.579166 | orchestrator | | locked | False |
2025-07-06 20:48:30.579178 | orchestrator | | locked_reason | None |
2025-07-06 20:48:30.579189 | orchestrator | | name | test-3 |
2025-07-06 20:48:30.579207 | orchestrator | | pinned_availability_zone | None |
2025-07-06 20:48:30.579219 | orchestrator | | progress | 0 |
2025-07-06 20:48:30.579230 | orchestrator | | project_id | b4b05305417647db9adbf38f3a7c87a5 |
2025-07-06 20:48:30.579241 | orchestrator | | properties | hostname='test-3' |
2025-07-06 20:48:30.579261 | orchestrator | | security_groups | name='ssh' |
2025-07-06 20:48:30.579272 | orchestrator | | | name='icmp' |
2025-07-06 20:48:30.579285 | orchestrator | | server_groups | None |
2025-07-06 20:48:30.579311 | orchestrator | | status | ACTIVE |
2025-07-06 20:48:30.579330 | orchestrator | | tags | test |
2025-07-06 20:48:30.579349 | orchestrator | | trusted_image_certificates | None |
2025-07-06 20:48:30.579367 | orchestrator | | updated | 2025-07-06T20:47:09Z |
2025-07-06 20:48:30.579395 | orchestrator | | user_id | 6ef46a54533c459a9f87cfaeb2750877 |
2025-07-06 20:48:30.579414 | orchestrator | | volumes_attached | |
2025-07-06 20:48:30.583201 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------+
2025-07-06 20:48:30.832085 | orchestrator | + openstack --os-cloud test server show test-4
2025-07-06 20:48:33.878417 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------+
2025-07-06 20:48:33.878514 | orchestrator | | Field | Value |
2025-07-06 20:48:33.878523 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------+
2025-07-06 20:48:33.878529 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-07-06 20:48:33.878546 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-07-06 20:48:33.878551 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-07-06 20:48:33.878556 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 |
2025-07-06 20:48:33.878561 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-07-06 20:48:33.878566 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-07-06 20:48:33.878571 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-07-06 20:48:33.878577 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-07-06 20:48:33.878598 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-07-06 20:48:33.878604 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-07-06 20:48:33.878609 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-07-06 20:48:33.878614 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-07-06 20:48:33.878619 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-07-06 20:48:33.878628 | orchestrator | | OS-EXT-STS:task_state | None |
2025-07-06 20:48:33.878633 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-07-06 20:48:33.878638 | orchestrator | | OS-SRV-USG:launched_at | 2025-07-06T20:46:41.000000 |
2025-07-06 20:48:33.878643 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-07-06 20:48:33.878648 | orchestrator | | accessIPv4 | |
2025-07-06 20:48:33.878653 | orchestrator | | accessIPv6 | |
2025-07-06 20:48:33.878662 | orchestrator | | addresses | auto_allocated_network=10.42.0.21, 192.168.112.103 |
2025-07-06 20:48:33.878671 | orchestrator | | config_drive | |
2025-07-06 20:48:33.878676 | orchestrator | | created | 2025-07-06T20:46:25Z |
2025-07-06 20:48:33.878681 | orchestrator | | description | None |
2025-07-06 20:48:33.878686 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-07-06 20:48:33.878692 | orchestrator | | hostId | 24417ff239b52e6f6dbbc81a3470043aa712c4610a4cd9d434c3f9c6 |
2025-07-06 20:48:33.878699 | orchestrator | | host_status | None |
2025-07-06 20:48:33.878704 | orchestrator | | id | 8c6b5176-fd6d-4294-a3dc-3a3a749e7512 |
2025-07-06 20:48:33.878709 | orchestrator | | image | Cirros 0.6.2 (6f08b0de-9f0d-445a-945f-03ff7e037f26) |
2025-07-06 20:48:33.878715 | orchestrator | | key_name | test |
2025-07-06 20:48:33.878720 | orchestrator | | locked | False |
2025-07-06 20:48:33.878728 | orchestrator | | locked_reason | None |
2025-07-06 20:48:33.878734 | orchestrator | | name | test-4 |
2025-07-06 20:48:33.878742 | orchestrator | | pinned_availability_zone | None |
2025-07-06 20:48:33.878747 | orchestrator | | progress | 0 |
2025-07-06 20:48:33.878752 | orchestrator | | project_id | b4b05305417647db9adbf38f3a7c87a5 |
2025-07-06 20:48:33.878757 | orchestrator | | properties | hostname='test-4' |
2025-07-06 20:48:33.878763 | orchestrator | | security_groups | name='ssh' |
2025-07-06 20:48:33.878768 | orchestrator | | | name='icmp' |
2025-07-06 20:48:33.878773 | orchestrator | | server_groups | None |
2025-07-06 20:48:33.879069 | orchestrator | | status | ACTIVE |
2025-07-06 20:48:33.879078 | orchestrator | | tags | test |
2025-07-06 20:48:33.879087 | orchestrator | | trusted_image_certificates | None |
2025-07-06 20:48:33.879092 | orchestrator | | updated | 2025-07-06T20:47:14Z |
2025-07-06 20:48:33.879101 | orchestrator | | user_id | 6ef46a54533c459a9f87cfaeb2750877 |
2025-07-06 20:48:33.879106 | orchestrator | | volumes_attached | |
2025-07-06 20:48:33.884513 | orchestrator |
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-07-06 20:48:34.124643 | orchestrator | + server_ping 2025-07-06 20:48:34.125331 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-07-06 20:48:34.128864 | orchestrator | ++ tr -d '\r' 2025-07-06 20:48:36.872018 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-06 20:48:36.872173 | orchestrator | + ping -c3 192.168.112.133 2025-07-06 20:48:36.893180 | orchestrator | PING 192.168.112.133 (192.168.112.133) 56(84) bytes of data. 2025-07-06 20:48:36.893273 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=1 ttl=63 time=15.0 ms 2025-07-06 20:48:37.882097 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=2 ttl=63 time=2.61 ms 2025-07-06 20:48:38.883285 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=3 ttl=63 time=1.89 ms 2025-07-06 20:48:38.883362 | orchestrator | 2025-07-06 20:48:38.883369 | orchestrator | --- 192.168.112.133 ping statistics --- 2025-07-06 20:48:38.883374 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-07-06 20:48:38.883379 | orchestrator | rtt min/avg/max/mdev = 1.892/6.514/15.040/6.035 ms 2025-07-06 20:48:38.884144 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-06 20:48:38.884234 | orchestrator | + ping -c3 192.168.112.103 2025-07-06 20:48:38.897020 | orchestrator | PING 192.168.112.103 (192.168.112.103) 56(84) bytes of data. 
2025-07-06 20:48:38.897154 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=1 ttl=63 time=8.08 ms 2025-07-06 20:48:39.892656 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=2 ttl=63 time=2.30 ms 2025-07-06 20:48:40.893740 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=3 ttl=63 time=1.56 ms 2025-07-06 20:48:40.894096 | orchestrator | 2025-07-06 20:48:40.894184 | orchestrator | --- 192.168.112.103 ping statistics --- 2025-07-06 20:48:40.894199 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-07-06 20:48:40.894210 | orchestrator | rtt min/avg/max/mdev = 1.557/3.978/8.078/2.914 ms 2025-07-06 20:48:40.894266 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-06 20:48:40.894279 | orchestrator | + ping -c3 192.168.112.177 2025-07-06 20:48:40.906830 | orchestrator | PING 192.168.112.177 (192.168.112.177) 56(84) bytes of data. 2025-07-06 20:48:40.906898 | orchestrator | 64 bytes from 192.168.112.177: icmp_seq=1 ttl=63 time=8.05 ms 2025-07-06 20:48:41.902511 | orchestrator | 64 bytes from 192.168.112.177: icmp_seq=2 ttl=63 time=2.23 ms 2025-07-06 20:48:42.903516 | orchestrator | 64 bytes from 192.168.112.177: icmp_seq=3 ttl=63 time=2.19 ms 2025-07-06 20:48:42.903623 | orchestrator | 2025-07-06 20:48:42.903639 | orchestrator | --- 192.168.112.177 ping statistics --- 2025-07-06 20:48:42.903653 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-07-06 20:48:42.903665 | orchestrator | rtt min/avg/max/mdev = 2.186/4.156/8.048/2.752 ms 2025-07-06 20:48:42.903980 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-06 20:48:42.904006 | orchestrator | + ping -c3 192.168.112.187 2025-07-06 20:48:42.915415 | orchestrator | PING 192.168.112.187 (192.168.112.187) 56(84) bytes of data. 
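The `server_ping` helper that the trace expands (`+ server_ping`, then one `ping -c3` per ACTIVE floating IP) can be reconstructed roughly as below. The cloud name `test` comes from the log; `strip_cr` is a hypothetical name introduced here for the `tr -d '\r'` step the trace shows.

```shell
#!/usr/bin/env bash
# Rough reconstruction of the server_ping helper from the trace above.
# strip_cr is a hypothetical wrapper for the "tr -d '\r'" step in the log;
# it removes carriage returns from the CLI output before word-splitting.
strip_cr() { tr -d '\r'; }

server_ping() {
    # Ping every ACTIVE floating IP three times, as in the trace.
    for address in $(openstack --os-cloud test floating ip list \
            --status ACTIVE -f value -c "Floating IP Address" | strip_cr); do
        ping -c3 "$address"
    done
}
```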
2025-07-06 20:48:42.915478 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=1 ttl=63 time=7.12 ms 2025-07-06 20:48:43.912500 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=2 ttl=63 time=2.32 ms 2025-07-06 20:48:44.914283 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=3 ttl=63 time=2.20 ms 2025-07-06 20:48:44.914397 | orchestrator | 2025-07-06 20:48:44.914415 | orchestrator | --- 192.168.112.187 ping statistics --- 2025-07-06 20:48:44.914429 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-07-06 20:48:44.914440 | orchestrator | rtt min/avg/max/mdev = 2.204/3.881/7.118/2.289 ms 2025-07-06 20:48:44.914575 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-06 20:48:44.914676 | orchestrator | + ping -c3 192.168.112.125 2025-07-06 20:48:44.927229 | orchestrator | PING 192.168.112.125 (192.168.112.125) 56(84) bytes of data. 2025-07-06 20:48:44.927325 | orchestrator | 64 bytes from 192.168.112.125: icmp_seq=1 ttl=63 time=8.34 ms 2025-07-06 20:48:45.923311 | orchestrator | 64 bytes from 192.168.112.125: icmp_seq=2 ttl=63 time=2.41 ms 2025-07-06 20:48:46.925191 | orchestrator | 64 bytes from 192.168.112.125: icmp_seq=3 ttl=63 time=1.98 ms 2025-07-06 20:48:46.925302 | orchestrator | 2025-07-06 20:48:46.925324 | orchestrator | --- 192.168.112.125 ping statistics --- 2025-07-06 20:48:46.925344 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-07-06 20:48:46.925363 | orchestrator | rtt min/avg/max/mdev = 1.977/4.241/8.336/2.900 ms 2025-07-06 20:48:46.926072 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-07-06 20:48:46.926109 | orchestrator | + compute_list 2025-07-06 20:48:46.926121 | orchestrator | + osism manage compute list testbed-node-3 2025-07-06 20:48:50.416435 | orchestrator | +--------------------------------------+--------+----------+ 2025-07-06 20:48:50.416550 | 
orchestrator | | ID | Name | Status | 2025-07-06 20:48:50.416564 | orchestrator | |--------------------------------------+--------+----------| 2025-07-06 20:48:50.416577 | orchestrator | | c87662e4-2344-4841-8f6c-78a95db51822 | test-2 | ACTIVE | 2025-07-06 20:48:50.416596 | orchestrator | +--------------------------------------+--------+----------+ 2025-07-06 20:48:50.715420 | orchestrator | + osism manage compute list testbed-node-4 2025-07-06 20:48:53.835356 | orchestrator | +--------------------------------------+--------+----------+ 2025-07-06 20:48:53.835462 | orchestrator | | ID | Name | Status | 2025-07-06 20:48:53.835477 | orchestrator | |--------------------------------------+--------+----------| 2025-07-06 20:48:53.835489 | orchestrator | | 8c6b5176-fd6d-4294-a3dc-3a3a749e7512 | test-4 | ACTIVE | 2025-07-06 20:48:53.835501 | orchestrator | | 72f77100-c5e7-43f6-994d-eb88ae94aab0 | test-1 | ACTIVE | 2025-07-06 20:48:53.835512 | orchestrator | +--------------------------------------+--------+----------+ 2025-07-06 20:48:54.113579 | orchestrator | + osism manage compute list testbed-node-5 2025-07-06 20:48:57.369497 | orchestrator | +--------------------------------------+--------+----------+ 2025-07-06 20:48:57.369615 | orchestrator | | ID | Name | Status | 2025-07-06 20:48:57.369651 | orchestrator | |--------------------------------------+--------+----------| 2025-07-06 20:48:57.369688 | orchestrator | | 3ef21d5c-66ad-4d83-88af-086eb34f1fd9 | test-3 | ACTIVE | 2025-07-06 20:48:57.369701 | orchestrator | | 811f2d63-b8d7-484b-b1ff-e4b198e2d293 | test | ACTIVE | 2025-07-06 20:48:57.369712 | orchestrator | +--------------------------------------+--------+----------+ 2025-07-06 20:48:57.650269 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-4 2025-07-06 20:49:00.663077 | orchestrator | 2025-07-06 20:49:00 | INFO  | Live migrating server 8c6b5176-fd6d-4294-a3dc-3a3a749e7512 2025-07-06 20:49:13.786481 | orchestrator | 
2025-07-06 20:49:13 | INFO  | Live migration of 8c6b5176-fd6d-4294-a3dc-3a3a749e7512 (test-4) is still in progress 2025-07-06 20:49:16.156599 | orchestrator | 2025-07-06 20:49:16 | INFO  | Live migration of 8c6b5176-fd6d-4294-a3dc-3a3a749e7512 (test-4) is still in progress 2025-07-06 20:49:18.663922 | orchestrator | 2025-07-06 20:49:18 | INFO  | Live migration of 8c6b5176-fd6d-4294-a3dc-3a3a749e7512 (test-4) is still in progress 2025-07-06 20:49:20.937012 | orchestrator | 2025-07-06 20:49:20 | INFO  | Live migration of 8c6b5176-fd6d-4294-a3dc-3a3a749e7512 (test-4) is still in progress 2025-07-06 20:49:23.318796 | orchestrator | 2025-07-06 20:49:23 | INFO  | Live migration of 8c6b5176-fd6d-4294-a3dc-3a3a749e7512 (test-4) is still in progress 2025-07-06 20:49:25.583115 | orchestrator | 2025-07-06 20:49:25 | INFO  | Live migration of 8c6b5176-fd6d-4294-a3dc-3a3a749e7512 (test-4) is still in progress 2025-07-06 20:49:28.055242 | orchestrator | 2025-07-06 20:49:28 | INFO  | Live migration of 8c6b5176-fd6d-4294-a3dc-3a3a749e7512 (test-4) is still in progress 2025-07-06 20:49:30.286448 | orchestrator | 2025-07-06 20:49:30 | INFO  | Live migration of 8c6b5176-fd6d-4294-a3dc-3a3a749e7512 (test-4) completed with status ACTIVE 2025-07-06 20:49:30.286525 | orchestrator | 2025-07-06 20:49:30 | INFO  | Live migrating server 72f77100-c5e7-43f6-994d-eb88ae94aab0 2025-07-06 20:49:42.732913 | orchestrator | 2025-07-06 20:49:42 | INFO  | Live migration of 72f77100-c5e7-43f6-994d-eb88ae94aab0 (test-1) is still in progress 2025-07-06 20:49:45.049943 | orchestrator | 2025-07-06 20:49:45 | INFO  | Live migration of 72f77100-c5e7-43f6-994d-eb88ae94aab0 (test-1) is still in progress 2025-07-06 20:49:47.795784 | orchestrator | 2025-07-06 20:49:47 | INFO  | Live migration of 72f77100-c5e7-43f6-994d-eb88ae94aab0 (test-1) is still in progress 2025-07-06 20:49:50.102465 | orchestrator | 2025-07-06 20:49:50 | INFO  | Live migration of 72f77100-c5e7-43f6-994d-eb88ae94aab0 (test-1) is still in 
progress 2025-07-06 20:49:52.348414 | orchestrator | 2025-07-06 20:49:52 | INFO  | Live migration of 72f77100-c5e7-43f6-994d-eb88ae94aab0 (test-1) is still in progress 2025-07-06 20:49:54.684245 | orchestrator | 2025-07-06 20:49:54 | INFO  | Live migration of 72f77100-c5e7-43f6-994d-eb88ae94aab0 (test-1) is still in progress 2025-07-06 20:49:57.020908 | orchestrator | 2025-07-06 20:49:57 | INFO  | Live migration of 72f77100-c5e7-43f6-994d-eb88ae94aab0 (test-1) completed with status ACTIVE 2025-07-06 20:49:57.302951 | orchestrator | + compute_list 2025-07-06 20:49:57.303050 | orchestrator | + osism manage compute list testbed-node-3 2025-07-06 20:50:00.508855 | orchestrator | +--------------------------------------+--------+----------+ 2025-07-06 20:50:00.508985 | orchestrator | | ID | Name | Status | 2025-07-06 20:50:00.509002 | orchestrator | |--------------------------------------+--------+----------| 2025-07-06 20:50:00.509014 | orchestrator | | 8c6b5176-fd6d-4294-a3dc-3a3a749e7512 | test-4 | ACTIVE | 2025-07-06 20:50:00.509038 | orchestrator | | c87662e4-2344-4841-8f6c-78a95db51822 | test-2 | ACTIVE | 2025-07-06 20:50:00.509050 | orchestrator | | 72f77100-c5e7-43f6-994d-eb88ae94aab0 | test-1 | ACTIVE | 2025-07-06 20:50:00.509062 | orchestrator | +--------------------------------------+--------+----------+ 2025-07-06 20:50:00.774014 | orchestrator | + osism manage compute list testbed-node-4 2025-07-06 20:50:03.438661 | orchestrator | +------+--------+----------+ 2025-07-06 20:50:03.438767 | orchestrator | | ID | Name | Status | 2025-07-06 20:50:03.438782 | orchestrator | |------+--------+----------| 2025-07-06 20:50:03.438794 | orchestrator | +------+--------+----------+ 2025-07-06 20:50:03.727714 | orchestrator | + osism manage compute list testbed-node-5 2025-07-06 20:50:06.711251 | orchestrator | +--------------------------------------+--------+----------+ 2025-07-06 20:50:06.711356 | orchestrator | | ID | Name | Status | 2025-07-06 20:50:06.711370 | 
orchestrator | |--------------------------------------+--------+----------| 2025-07-06 20:50:06.711381 | orchestrator | | 3ef21d5c-66ad-4d83-88af-086eb34f1fd9 | test-3 | ACTIVE | 2025-07-06 20:50:06.711391 | orchestrator | | 811f2d63-b8d7-484b-b1ff-e4b198e2d293 | test | ACTIVE | 2025-07-06 20:50:06.711402 | orchestrator | +--------------------------------------+--------+----------+ 2025-07-06 20:50:06.965571 | orchestrator | + server_ping 2025-07-06 20:50:06.966463 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-07-06 20:50:06.967029 | orchestrator | ++ tr -d '\r' 2025-07-06 20:50:09.736316 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-06 20:50:09.736454 | orchestrator | + ping -c3 192.168.112.133 2025-07-06 20:50:09.750698 | orchestrator | PING 192.168.112.133 (192.168.112.133) 56(84) bytes of data. 2025-07-06 20:50:09.750817 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=1 ttl=63 time=11.4 ms 2025-07-06 20:50:10.743700 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=2 ttl=63 time=2.67 ms 2025-07-06 20:50:11.745328 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=3 ttl=63 time=2.29 ms 2025-07-06 20:50:11.745454 | orchestrator | 2025-07-06 20:50:11.745470 | orchestrator | --- 192.168.112.133 ping statistics --- 2025-07-06 20:50:11.745483 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-07-06 20:50:11.746397 | orchestrator | rtt min/avg/max/mdev = 2.288/5.438/11.359/4.189 ms 2025-07-06 20:50:11.746498 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-06 20:50:11.746513 | orchestrator | + ping -c3 192.168.112.103 2025-07-06 20:50:11.759806 | orchestrator | PING 192.168.112.103 (192.168.112.103) 56(84) bytes of data. 
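`osism manage compute list <node>` prints the servers hosted on one compute node. A hypothetical stand-in using the plain `openstack` CLI might look like the sketch below; the `--host` filter on `server list` is admin-only, so this assumes the cloud entry has sufficient privileges. The `DRY_RUN` switch is added here only so the sketch can be exercised without a cloud.

```shell
#!/usr/bin/env bash
# Hypothetical equivalent of "osism manage compute list <node>" using the
# plain openstack CLI (--host is an admin-only server list filter).
# With DRY_RUN=1 the function prints the command instead of executing it.
compute_list_node() {
    local node="$1"
    local cmd=(openstack --os-cloud test server list --all-projects
               --host "$node" -c ID -c Name -c Status)
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "${cmd[*]}"
    else
        "${cmd[@]}"
    fi
}
```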
2025-07-06 20:50:11.759934 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=1 ttl=63 time=8.86 ms 2025-07-06 20:50:12.753679 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=2 ttl=63 time=2.62 ms 2025-07-06 20:50:13.754540 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=3 ttl=63 time=1.89 ms 2025-07-06 20:50:13.754647 | orchestrator | 2025-07-06 20:50:13.754663 | orchestrator | --- 192.168.112.103 ping statistics --- 2025-07-06 20:50:13.754676 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-07-06 20:50:13.754688 | orchestrator | rtt min/avg/max/mdev = 1.886/4.454/8.862/3.130 ms 2025-07-06 20:50:13.755014 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-06 20:50:13.755050 | orchestrator | + ping -c3 192.168.112.177 2025-07-06 20:50:13.769053 | orchestrator | PING 192.168.112.177 (192.168.112.177) 56(84) bytes of data. 2025-07-06 20:50:13.769135 | orchestrator | 64 bytes from 192.168.112.177: icmp_seq=1 ttl=63 time=9.44 ms 2025-07-06 20:50:14.764093 | orchestrator | 64 bytes from 192.168.112.177: icmp_seq=2 ttl=63 time=2.78 ms 2025-07-06 20:50:15.766005 | orchestrator | 64 bytes from 192.168.112.177: icmp_seq=3 ttl=63 time=2.29 ms 2025-07-06 20:50:15.766219 | orchestrator | 2025-07-06 20:50:15.766239 | orchestrator | --- 192.168.112.177 ping statistics --- 2025-07-06 20:50:15.766253 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-07-06 20:50:15.766265 | orchestrator | rtt min/avg/max/mdev = 2.294/4.836/9.435/3.257 ms 2025-07-06 20:50:15.766567 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-06 20:50:15.766593 | orchestrator | + ping -c3 192.168.112.187 2025-07-06 20:50:15.780776 | orchestrator | PING 192.168.112.187 (192.168.112.187) 56(84) bytes of data. 
2025-07-06 20:50:15.780902 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=1 ttl=63 time=9.60 ms 2025-07-06 20:50:16.775893 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=2 ttl=63 time=2.65 ms 2025-07-06 20:50:17.776867 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=3 ttl=63 time=2.06 ms 2025-07-06 20:50:17.776993 | orchestrator | 2025-07-06 20:50:17.777011 | orchestrator | --- 192.168.112.187 ping statistics --- 2025-07-06 20:50:17.777025 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-07-06 20:50:17.777036 | orchestrator | rtt min/avg/max/mdev = 2.055/4.766/9.596/3.423 ms 2025-07-06 20:50:17.777048 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-06 20:50:17.777352 | orchestrator | + ping -c3 192.168.112.125 2025-07-06 20:50:17.789454 | orchestrator | PING 192.168.112.125 (192.168.112.125) 56(84) bytes of data. 2025-07-06 20:50:17.789543 | orchestrator | 64 bytes from 192.168.112.125: icmp_seq=1 ttl=63 time=7.72 ms 2025-07-06 20:50:18.786684 | orchestrator | 64 bytes from 192.168.112.125: icmp_seq=2 ttl=63 time=2.62 ms 2025-07-06 20:50:19.788820 | orchestrator | 64 bytes from 192.168.112.125: icmp_seq=3 ttl=63 time=2.42 ms 2025-07-06 20:50:19.788920 | orchestrator | 2025-07-06 20:50:19.788933 | orchestrator | --- 192.168.112.125 ping statistics --- 2025-07-06 20:50:19.788978 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-07-06 20:50:19.788989 | orchestrator | rtt min/avg/max/mdev = 2.421/4.254/7.720/2.452 ms 2025-07-06 20:50:19.789338 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-5 2025-07-06 20:50:22.817402 | orchestrator | 2025-07-06 20:50:22 | INFO  | Live migrating server 3ef21d5c-66ad-4d83-88af-086eb34f1fd9 2025-07-06 20:50:33.584235 | orchestrator | 2025-07-06 20:50:33 | INFO  | Live migration of 
3ef21d5c-66ad-4d83-88af-086eb34f1fd9 (test-3) is still in progress 2025-07-06 20:50:35.907326 | orchestrator | 2025-07-06 20:50:35 | INFO  | Live migration of 3ef21d5c-66ad-4d83-88af-086eb34f1fd9 (test-3) is still in progress 2025-07-06 20:50:38.211069 | orchestrator | 2025-07-06 20:50:38 | INFO  | Live migration of 3ef21d5c-66ad-4d83-88af-086eb34f1fd9 (test-3) is still in progress 2025-07-06 20:50:40.452471 | orchestrator | 2025-07-06 20:50:40 | INFO  | Live migration of 3ef21d5c-66ad-4d83-88af-086eb34f1fd9 (test-3) is still in progress 2025-07-06 20:50:42.717367 | orchestrator | 2025-07-06 20:50:42 | INFO  | Live migration of 3ef21d5c-66ad-4d83-88af-086eb34f1fd9 (test-3) is still in progress 2025-07-06 20:50:44.995200 | orchestrator | 2025-07-06 20:50:44 | INFO  | Live migration of 3ef21d5c-66ad-4d83-88af-086eb34f1fd9 (test-3) is still in progress 2025-07-06 20:50:47.243158 | orchestrator | 2025-07-06 20:50:47 | INFO  | Live migration of 3ef21d5c-66ad-4d83-88af-086eb34f1fd9 (test-3) is still in progress 2025-07-06 20:50:49.591148 | orchestrator | 2025-07-06 20:50:49 | INFO  | Live migration of 3ef21d5c-66ad-4d83-88af-086eb34f1fd9 (test-3) completed with status ACTIVE 2025-07-06 20:50:49.591360 | orchestrator | 2025-07-06 20:50:49 | INFO  | Live migrating server 811f2d63-b8d7-484b-b1ff-e4b198e2d293 2025-07-06 20:51:01.230123 | orchestrator | 2025-07-06 20:51:01 | INFO  | Live migration of 811f2d63-b8d7-484b-b1ff-e4b198e2d293 (test) is still in progress 2025-07-06 20:51:03.603586 | orchestrator | 2025-07-06 20:51:03 | INFO  | Live migration of 811f2d63-b8d7-484b-b1ff-e4b198e2d293 (test) is still in progress 2025-07-06 20:51:05.946913 | orchestrator | 2025-07-06 20:51:05 | INFO  | Live migration of 811f2d63-b8d7-484b-b1ff-e4b198e2d293 (test) is still in progress 2025-07-06 20:51:08.317641 | orchestrator | 2025-07-06 20:51:08 | INFO  | Live migration of 811f2d63-b8d7-484b-b1ff-e4b198e2d293 (test) is still in progress 2025-07-06 20:51:10.545651 | orchestrator | 
2025-07-06 20:51:10 | INFO  | Live migration of 811f2d63-b8d7-484b-b1ff-e4b198e2d293 (test) is still in progress 2025-07-06 20:51:12.825494 | orchestrator | 2025-07-06 20:51:12 | INFO  | Live migration of 811f2d63-b8d7-484b-b1ff-e4b198e2d293 (test) is still in progress 2025-07-06 20:51:15.094833 | orchestrator | 2025-07-06 20:51:15 | INFO  | Live migration of 811f2d63-b8d7-484b-b1ff-e4b198e2d293 (test) is still in progress 2025-07-06 20:51:17.351597 | orchestrator | 2025-07-06 20:51:17 | INFO  | Live migration of 811f2d63-b8d7-484b-b1ff-e4b198e2d293 (test) is still in progress 2025-07-06 20:51:19.701949 | orchestrator | 2025-07-06 20:51:19 | INFO  | Live migration of 811f2d63-b8d7-484b-b1ff-e4b198e2d293 (test) is still in progress 2025-07-06 20:51:21.981603 | orchestrator | 2025-07-06 20:51:21 | INFO  | Live migration of 811f2d63-b8d7-484b-b1ff-e4b198e2d293 (test) completed with status ACTIVE 2025-07-06 20:51:22.261779 | orchestrator | + compute_list 2025-07-06 20:51:22.261906 | orchestrator | + osism manage compute list testbed-node-3 2025-07-06 20:51:25.408534 | orchestrator | +--------------------------------------+--------+----------+ 2025-07-06 20:51:25.408639 | orchestrator | | ID | Name | Status | 2025-07-06 20:51:25.408654 | orchestrator | |--------------------------------------+--------+----------| 2025-07-06 20:51:25.408666 | orchestrator | | 8c6b5176-fd6d-4294-a3dc-3a3a749e7512 | test-4 | ACTIVE | 2025-07-06 20:51:25.408677 | orchestrator | | 3ef21d5c-66ad-4d83-88af-086eb34f1fd9 | test-3 | ACTIVE | 2025-07-06 20:51:25.408689 | orchestrator | | c87662e4-2344-4841-8f6c-78a95db51822 | test-2 | ACTIVE | 2025-07-06 20:51:25.408700 | orchestrator | | 72f77100-c5e7-43f6-994d-eb88ae94aab0 | test-1 | ACTIVE | 2025-07-06 20:51:25.408712 | orchestrator | | 811f2d63-b8d7-484b-b1ff-e4b198e2d293 | test | ACTIVE | 2025-07-06 20:51:25.408723 | orchestrator | +--------------------------------------+--------+----------+ 2025-07-06 20:51:25.689974 | orchestrator | + osism 
manage compute list testbed-node-4 2025-07-06 20:51:28.580503 | orchestrator | +------+--------+----------+ 2025-07-06 20:51:28.580620 | orchestrator | | ID | Name | Status | 2025-07-06 20:51:28.580635 | orchestrator | |------+--------+----------| 2025-07-06 20:51:28.580647 | orchestrator | +------+--------+----------+ 2025-07-06 20:51:28.854191 | orchestrator | + osism manage compute list testbed-node-5 2025-07-06 20:51:31.451650 | orchestrator | +------+--------+----------+ 2025-07-06 20:51:31.451758 | orchestrator | | ID | Name | Status | 2025-07-06 20:51:31.451772 | orchestrator | |------+--------+----------| 2025-07-06 20:51:31.451783 | orchestrator | +------+--------+----------+ 2025-07-06 20:51:31.713045 | orchestrator | + server_ping 2025-07-06 20:51:31.714140 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-07-06 20:51:31.714600 | orchestrator | ++ tr -d '\r' 2025-07-06 20:51:34.510255 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-06 20:51:34.510374 | orchestrator | + ping -c3 192.168.112.133 2025-07-06 20:51:34.523450 | orchestrator | PING 192.168.112.133 (192.168.112.133) 56(84) bytes of data. 
2025-07-06 20:51:34.523541 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=1 ttl=63 time=11.6 ms 2025-07-06 20:51:35.516162 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=2 ttl=63 time=2.45 ms 2025-07-06 20:51:36.518137 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=3 ttl=63 time=1.91 ms 2025-07-06 20:51:36.518299 | orchestrator | 2025-07-06 20:51:36.518319 | orchestrator | --- 192.168.112.133 ping statistics --- 2025-07-06 20:51:36.518332 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-07-06 20:51:36.518345 | orchestrator | rtt min/avg/max/mdev = 1.905/5.325/11.622/4.457 ms 2025-07-06 20:51:36.518601 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-06 20:51:36.518625 | orchestrator | + ping -c3 192.168.112.103 2025-07-06 20:51:36.531944 | orchestrator | PING 192.168.112.103 (192.168.112.103) 56(84) bytes of data. 2025-07-06 20:51:36.532004 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=1 ttl=63 time=9.34 ms 2025-07-06 20:51:37.527138 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=2 ttl=63 time=2.72 ms 2025-07-06 20:51:38.529172 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=3 ttl=63 time=1.96 ms 2025-07-06 20:51:38.529301 | orchestrator | 2025-07-06 20:51:38.529315 | orchestrator | --- 192.168.112.103 ping statistics --- 2025-07-06 20:51:38.529327 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-07-06 20:51:38.529338 | orchestrator | rtt min/avg/max/mdev = 1.964/4.673/9.335/3.310 ms 2025-07-06 20:51:38.529349 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-06 20:51:38.529387 | orchestrator | + ping -c3 192.168.112.177 2025-07-06 20:51:38.541736 | orchestrator | PING 192.168.112.177 (192.168.112.177) 56(84) bytes of data. 
2025-07-06 20:51:38.541865 | orchestrator | 64 bytes from 192.168.112.177: icmp_seq=1 ttl=63 time=8.27 ms 2025-07-06 20:51:39.537518 | orchestrator | 64 bytes from 192.168.112.177: icmp_seq=2 ttl=63 time=2.56 ms 2025-07-06 20:51:40.539436 | orchestrator | 64 bytes from 192.168.112.177: icmp_seq=3 ttl=63 time=2.00 ms 2025-07-06 20:51:40.539610 | orchestrator | 2025-07-06 20:51:40.539628 | orchestrator | --- 192.168.112.177 ping statistics --- 2025-07-06 20:51:40.539641 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-07-06 20:51:40.539653 | orchestrator | rtt min/avg/max/mdev = 1.996/4.274/8.269/2.833 ms 2025-07-06 20:51:40.539695 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-06 20:51:40.539709 | orchestrator | + ping -c3 192.168.112.187 2025-07-06 20:51:40.552081 | orchestrator | PING 192.168.112.187 (192.168.112.187) 56(84) bytes of data. 2025-07-06 20:51:40.552139 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=1 ttl=63 time=7.86 ms 2025-07-06 20:51:41.547897 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=2 ttl=63 time=2.20 ms 2025-07-06 20:51:42.549282 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=3 ttl=63 time=1.75 ms 2025-07-06 20:51:42.549385 | orchestrator | 2025-07-06 20:51:42.549402 | orchestrator | --- 192.168.112.187 ping statistics --- 2025-07-06 20:51:42.549415 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-07-06 20:51:42.549426 | orchestrator | rtt min/avg/max/mdev = 1.749/3.935/7.859/2.780 ms 2025-07-06 20:51:42.549439 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-06 20:51:42.549450 | orchestrator | + ping -c3 192.168.112.125 2025-07-06 20:51:42.557590 | orchestrator | PING 192.168.112.125 (192.168.112.125) 56(84) bytes of data. 
2025-07-06 20:51:42.557660 | orchestrator | 64 bytes from 192.168.112.125: icmp_seq=1 ttl=63 time=5.39 ms 2025-07-06 20:51:43.556484 | orchestrator | 64 bytes from 192.168.112.125: icmp_seq=2 ttl=63 time=2.32 ms 2025-07-06 20:51:44.558143 | orchestrator | 64 bytes from 192.168.112.125: icmp_seq=3 ttl=63 time=1.81 ms 2025-07-06 20:51:44.558337 | orchestrator | 2025-07-06 20:51:44.558358 | orchestrator | --- 192.168.112.125 ping statistics --- 2025-07-06 20:51:44.558372 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-07-06 20:51:44.558384 | orchestrator | rtt min/avg/max/mdev = 1.805/3.173/5.394/1.584 ms 2025-07-06 20:51:44.558395 | orchestrator | + osism manage compute migrate --yes --target testbed-node-4 testbed-node-3 2025-07-06 20:51:47.660118 | orchestrator | 2025-07-06 20:51:47 | INFO  | Live migrating server 8c6b5176-fd6d-4294-a3dc-3a3a749e7512 2025-07-06 20:51:58.860464 | orchestrator | 2025-07-06 20:51:58 | INFO  | Live migration of 8c6b5176-fd6d-4294-a3dc-3a3a749e7512 (test-4) is still in progress 2025-07-06 20:52:01.198193 | orchestrator | 2025-07-06 20:52:01 | INFO  | Live migration of 8c6b5176-fd6d-4294-a3dc-3a3a749e7512 (test-4) is still in progress 2025-07-06 20:52:03.448469 | orchestrator | 2025-07-06 20:52:03 | INFO  | Live migration of 8c6b5176-fd6d-4294-a3dc-3a3a749e7512 (test-4) is still in progress 2025-07-06 20:52:05.724122 | orchestrator | 2025-07-06 20:52:05 | INFO  | Live migration of 8c6b5176-fd6d-4294-a3dc-3a3a749e7512 (test-4) is still in progress 2025-07-06 20:52:08.075647 | orchestrator | 2025-07-06 20:52:08 | INFO  | Live migration of 8c6b5176-fd6d-4294-a3dc-3a3a749e7512 (test-4) is still in progress 2025-07-06 20:52:10.401723 | orchestrator | 2025-07-06 20:52:10 | INFO  | Live migration of 8c6b5176-fd6d-4294-a3dc-3a3a749e7512 (test-4) is still in progress 2025-07-06 20:52:12.673474 | orchestrator | 2025-07-06 20:52:12 | INFO  | Live migration of 8c6b5176-fd6d-4294-a3dc-3a3a749e7512 (test-4) is 
still in progress 2025-07-06 20:52:14.990982 | orchestrator | 2025-07-06 20:52:14 | INFO  | Live migration of 8c6b5176-fd6d-4294-a3dc-3a3a749e7512 (test-4) completed with status ACTIVE 2025-07-06 20:52:14.991957 | orchestrator | 2025-07-06 20:52:14 | INFO  | Live migrating server 3ef21d5c-66ad-4d83-88af-086eb34f1fd9 2025-07-06 20:52:26.891107 | orchestrator | 2025-07-06 20:52:26 | INFO  | Live migration of 3ef21d5c-66ad-4d83-88af-086eb34f1fd9 (test-3) is still in progress 2025-07-06 20:52:29.233184 | orchestrator | 2025-07-06 20:52:29 | INFO  | Live migration of 3ef21d5c-66ad-4d83-88af-086eb34f1fd9 (test-3) is still in progress 2025-07-06 20:52:31.574897 | orchestrator | 2025-07-06 20:52:31 | INFO  | Live migration of 3ef21d5c-66ad-4d83-88af-086eb34f1fd9 (test-3) is still in progress 2025-07-06 20:52:33.978370 | orchestrator | 2025-07-06 20:52:33 | INFO  | Live migration of 3ef21d5c-66ad-4d83-88af-086eb34f1fd9 (test-3) is still in progress 2025-07-06 20:52:36.264372 | orchestrator | 2025-07-06 20:52:36 | INFO  | Live migration of 3ef21d5c-66ad-4d83-88af-086eb34f1fd9 (test-3) is still in progress 2025-07-06 20:52:38.589454 | orchestrator | 2025-07-06 20:52:38 | INFO  | Live migration of 3ef21d5c-66ad-4d83-88af-086eb34f1fd9 (test-3) is still in progress 2025-07-06 20:52:40.936605 | orchestrator | 2025-07-06 20:52:40 | INFO  | Live migration of 3ef21d5c-66ad-4d83-88af-086eb34f1fd9 (test-3) is still in progress 2025-07-06 20:52:43.303378 | orchestrator | 2025-07-06 20:52:43 | INFO  | Live migration of 3ef21d5c-66ad-4d83-88af-086eb34f1fd9 (test-3) completed with status ACTIVE 2025-07-06 20:52:43.303536 | orchestrator | 2025-07-06 20:52:43 | INFO  | Live migrating server c87662e4-2344-4841-8f6c-78a95db51822 2025-07-06 20:52:52.968760 | orchestrator | 2025-07-06 20:52:52 | INFO  | Live migration of c87662e4-2344-4841-8f6c-78a95db51822 (test-2) is still in progress 2025-07-06 20:52:55.312865 | orchestrator | 2025-07-06 20:52:55 | INFO  | Live migration of 
c87662e4-2344-4841-8f6c-78a95db51822 (test-2) is still in progress 2025-07-06 20:52:57.681849 | orchestrator | 2025-07-06 20:52:57 | INFO  | Live migration of c87662e4-2344-4841-8f6c-78a95db51822 (test-2) is still in progress 2025-07-06 20:53:00.046911 | orchestrator | 2025-07-06 20:53:00 | INFO  | Live migration of c87662e4-2344-4841-8f6c-78a95db51822 (test-2) is still in progress 2025-07-06 20:53:02.342659 | orchestrator | 2025-07-06 20:53:02 | INFO  | Live migration of c87662e4-2344-4841-8f6c-78a95db51822 (test-2) is still in progress 2025-07-06 20:53:04.697931 | orchestrator | 2025-07-06 20:53:04 | INFO  | Live migration of c87662e4-2344-4841-8f6c-78a95db51822 (test-2) is still in progress 2025-07-06 20:53:07.036926 | orchestrator | 2025-07-06 20:53:07 | INFO  | Live migration of c87662e4-2344-4841-8f6c-78a95db51822 (test-2) completed with status ACTIVE 2025-07-06 20:53:07.037034 | orchestrator | 2025-07-06 20:53:07 | INFO  | Live migrating server 72f77100-c5e7-43f6-994d-eb88ae94aab0 2025-07-06 20:53:18.210598 | orchestrator | 2025-07-06 20:53:18 | INFO  | Live migration of 72f77100-c5e7-43f6-994d-eb88ae94aab0 (test-1) is still in progress 2025-07-06 20:53:20.564128 | orchestrator | 2025-07-06 20:53:20 | INFO  | Live migration of 72f77100-c5e7-43f6-994d-eb88ae94aab0 (test-1) is still in progress 2025-07-06 20:53:22.871713 | orchestrator | 2025-07-06 20:53:22 | INFO  | Live migration of 72f77100-c5e7-43f6-994d-eb88ae94aab0 (test-1) is still in progress 2025-07-06 20:53:25.102722 | orchestrator | 2025-07-06 20:53:25 | INFO  | Live migration of 72f77100-c5e7-43f6-994d-eb88ae94aab0 (test-1) is still in progress 2025-07-06 20:53:27.368930 | orchestrator | 2025-07-06 20:53:27 | INFO  | Live migration of 72f77100-c5e7-43f6-994d-eb88ae94aab0 (test-1) is still in progress 2025-07-06 20:53:29.615597 | orchestrator | 2025-07-06 20:53:29 | INFO  | Live migration of 72f77100-c5e7-43f6-994d-eb88ae94aab0 (test-1) is still in progress 2025-07-06 20:53:31.897855 | orchestrator 
| 2025-07-06 20:53:31 | INFO  | Live migration of 72f77100-c5e7-43f6-994d-eb88ae94aab0 (test-1) completed with status ACTIVE 2025-07-06 20:53:31.897992 | orchestrator | 2025-07-06 20:53:31 | INFO  | Live migrating server 811f2d63-b8d7-484b-b1ff-e4b198e2d293 2025-07-06 20:53:42.019962 | orchestrator | 2025-07-06 20:53:42 | INFO  | Live migration of 811f2d63-b8d7-484b-b1ff-e4b198e2d293 (test) is still in progress 2025-07-06 20:53:44.396730 | orchestrator | 2025-07-06 20:53:44 | INFO  | Live migration of 811f2d63-b8d7-484b-b1ff-e4b198e2d293 (test) is still in progress 2025-07-06 20:53:46.768446 | orchestrator | 2025-07-06 20:53:46 | INFO  | Live migration of 811f2d63-b8d7-484b-b1ff-e4b198e2d293 (test) is still in progress 2025-07-06 20:53:49.144626 | orchestrator | 2025-07-06 20:53:49 | INFO  | Live migration of 811f2d63-b8d7-484b-b1ff-e4b198e2d293 (test) is still in progress 2025-07-06 20:53:51.470377 | orchestrator | 2025-07-06 20:53:51 | INFO  | Live migration of 811f2d63-b8d7-484b-b1ff-e4b198e2d293 (test) is still in progress 2025-07-06 20:53:53.766567 | orchestrator | 2025-07-06 20:53:53 | INFO  | Live migration of 811f2d63-b8d7-484b-b1ff-e4b198e2d293 (test) is still in progress 2025-07-06 20:53:56.020167 | orchestrator | 2025-07-06 20:53:56 | INFO  | Live migration of 811f2d63-b8d7-484b-b1ff-e4b198e2d293 (test) is still in progress 2025-07-06 20:53:58.314851 | orchestrator | 2025-07-06 20:53:58 | INFO  | Live migration of 811f2d63-b8d7-484b-b1ff-e4b198e2d293 (test) is still in progress 2025-07-06 20:54:00.567156 | orchestrator | 2025-07-06 20:54:00 | INFO  | Live migration of 811f2d63-b8d7-484b-b1ff-e4b198e2d293 (test) is still in progress 2025-07-06 20:54:02.894978 | orchestrator | 2025-07-06 20:54:02 | INFO  | Live migration of 811f2d63-b8d7-484b-b1ff-e4b198e2d293 (test) completed with status ACTIVE 2025-07-06 20:54:03.173068 | orchestrator | + compute_list 2025-07-06 20:54:03.173167 | orchestrator | + osism manage compute list testbed-node-3 2025-07-06 
20:54:05.774237 | orchestrator | +------+--------+----------+
2025-07-06 20:54:05.774412 | orchestrator | | ID | Name | Status |
2025-07-06 20:54:05.774429 | orchestrator | |------+--------+----------|
2025-07-06 20:54:05.774441 | orchestrator | +------+--------+----------+
2025-07-06 20:54:06.057220 | orchestrator | + osism manage compute list testbed-node-4
2025-07-06 20:54:09.145655 | orchestrator | +--------------------------------------+--------+----------+
2025-07-06 20:54:09.145763 | orchestrator | | ID | Name | Status |
2025-07-06 20:54:09.145779 | orchestrator | |--------------------------------------+--------+----------|
2025-07-06 20:54:09.145791 | orchestrator | | 8c6b5176-fd6d-4294-a3dc-3a3a749e7512 | test-4 | ACTIVE |
2025-07-06 20:54:09.145802 | orchestrator | | 3ef21d5c-66ad-4d83-88af-086eb34f1fd9 | test-3 | ACTIVE |
2025-07-06 20:54:09.145814 | orchestrator | | c87662e4-2344-4841-8f6c-78a95db51822 | test-2 | ACTIVE |
2025-07-06 20:54:09.145825 | orchestrator | | 72f77100-c5e7-43f6-994d-eb88ae94aab0 | test-1 | ACTIVE |
2025-07-06 20:54:09.145837 | orchestrator | | 811f2d63-b8d7-484b-b1ff-e4b198e2d293 | test | ACTIVE |
2025-07-06 20:54:09.145848 | orchestrator | +--------------------------------------+--------+----------+
2025-07-06 20:54:09.407320 | orchestrator | + osism manage compute list testbed-node-5
2025-07-06 20:54:12.024206 | orchestrator | +------+--------+----------+
2025-07-06 20:54:12.024392 | orchestrator | | ID | Name | Status |
2025-07-06 20:54:12.024411 | orchestrator | |------+--------+----------|
2025-07-06 20:54:12.024423 | orchestrator | +------+--------+----------+
2025-07-06 20:54:12.283470 | orchestrator | + server_ping
2025-07-06 20:54:12.284779 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-07-06 20:54:12.284811 | orchestrator | ++ tr -d '\r'
2025-07-06 20:54:15.376997 | orchestrator | + for address in $(openstack --os-cloud test floating ip list
--status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-06 20:54:15.377109 | orchestrator | + ping -c3 192.168.112.133 2025-07-06 20:54:15.391124 | orchestrator | PING 192.168.112.133 (192.168.112.133) 56(84) bytes of data. 2025-07-06 20:54:15.391206 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=1 ttl=63 time=11.4 ms 2025-07-06 20:54:16.384724 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=2 ttl=63 time=3.05 ms 2025-07-06 20:54:17.385579 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=3 ttl=63 time=1.85 ms 2025-07-06 20:54:17.385700 | orchestrator | 2025-07-06 20:54:17.385726 | orchestrator | --- 192.168.112.133 ping statistics --- 2025-07-06 20:54:17.385745 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-07-06 20:54:17.385762 | orchestrator | rtt min/avg/max/mdev = 1.847/5.432/11.402/4.249 ms 2025-07-06 20:54:17.385780 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-06 20:54:17.385798 | orchestrator | + ping -c3 192.168.112.103 2025-07-06 20:54:17.398895 | orchestrator | PING 192.168.112.103 (192.168.112.103) 56(84) bytes of data. 
2025-07-06 20:54:17.398984 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=1 ttl=63 time=8.52 ms 2025-07-06 20:54:18.394564 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=2 ttl=63 time=2.21 ms 2025-07-06 20:54:19.396457 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=3 ttl=63 time=1.81 ms 2025-07-06 20:54:19.396576 | orchestrator | 2025-07-06 20:54:19.396598 | orchestrator | --- 192.168.112.103 ping statistics --- 2025-07-06 20:54:19.396618 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-07-06 20:54:19.396635 | orchestrator | rtt min/avg/max/mdev = 1.811/4.179/8.515/3.070 ms 2025-07-06 20:54:19.396651 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-06 20:54:19.396669 | orchestrator | + ping -c3 192.168.112.177 2025-07-06 20:54:19.409625 | orchestrator | PING 192.168.112.177 (192.168.112.177) 56(84) bytes of data. 2025-07-06 20:54:19.409714 | orchestrator | 64 bytes from 192.168.112.177: icmp_seq=1 ttl=63 time=8.29 ms 2025-07-06 20:54:20.404747 | orchestrator | 64 bytes from 192.168.112.177: icmp_seq=2 ttl=63 time=2.20 ms 2025-07-06 20:54:21.405973 | orchestrator | 64 bytes from 192.168.112.177: icmp_seq=3 ttl=63 time=2.06 ms 2025-07-06 20:54:21.406138 | orchestrator | 2025-07-06 20:54:21.406155 | orchestrator | --- 192.168.112.177 ping statistics --- 2025-07-06 20:54:21.406169 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-07-06 20:54:21.406180 | orchestrator | rtt min/avg/max/mdev = 2.063/4.185/8.290/2.902 ms 2025-07-06 20:54:21.406611 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-06 20:54:21.406643 | orchestrator | + ping -c3 192.168.112.187 2025-07-06 20:54:21.418328 | orchestrator | PING 192.168.112.187 (192.168.112.187) 56(84) bytes of data. 
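The `osism manage compute migrate` runs above poll Nova every couple of seconds, logging a "still in progress" line per iteration until each live migration reports ACTIVE. The wait loop can be sketched as follows — `server_status` is a hypothetical stand-in for the actual server-status query, and the two-second interval is inferred from the log timestamps:

```shell
#!/usr/bin/env bash
set -euo pipefail

wait_for_migration() {
    # Poll until the server reports ACTIVE again. The real CLI emits one
    # "still in progress" line per poll, as seen in the log above.
    local server="$1" status
    while status=$(server_status "$server"); [ "$status" != "ACTIVE" ]; do
        echo "Live migration of $server is still in progress"
        sleep 2
    done
    echo "Live migration of $server completed with status ACTIVE"
}
```

A sketch under stated assumptions, not the osism implementation; the real tool also handles error states rather than assuming the migration always converges to ACTIVE.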
2025-07-06 20:54:21.418419 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=1 ttl=63 time=6.70 ms 2025-07-06 20:54:22.416494 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=2 ttl=63 time=2.65 ms 2025-07-06 20:54:23.418112 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=3 ttl=63 time=2.34 ms 2025-07-06 20:54:23.418218 | orchestrator | 2025-07-06 20:54:23.418235 | orchestrator | --- 192.168.112.187 ping statistics --- 2025-07-06 20:54:23.418249 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-07-06 20:54:23.418261 | orchestrator | rtt min/avg/max/mdev = 2.341/3.895/6.700/1.987 ms 2025-07-06 20:54:23.418499 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-06 20:54:23.418522 | orchestrator | + ping -c3 192.168.112.125 2025-07-06 20:54:23.430147 | orchestrator | PING 192.168.112.125 (192.168.112.125) 56(84) bytes of data. 2025-07-06 20:54:23.430212 | orchestrator | 64 bytes from 192.168.112.125: icmp_seq=1 ttl=63 time=6.68 ms 2025-07-06 20:54:24.428029 | orchestrator | 64 bytes from 192.168.112.125: icmp_seq=2 ttl=63 time=2.21 ms 2025-07-06 20:54:25.427774 | orchestrator | 64 bytes from 192.168.112.125: icmp_seq=3 ttl=63 time=1.63 ms 2025-07-06 20:54:25.428668 | orchestrator | 2025-07-06 20:54:25.428703 | orchestrator | --- 192.168.112.125 ping statistics --- 2025-07-06 20:54:25.428711 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-07-06 20:54:25.428719 | orchestrator | rtt min/avg/max/mdev = 1.630/3.507/6.683/2.258 ms 2025-07-06 20:54:25.428738 | orchestrator | + osism manage compute migrate --yes --target testbed-node-5 testbed-node-4 2025-07-06 20:54:28.704760 | orchestrator | 2025-07-06 20:54:28 | INFO  | Live migrating server 8c6b5176-fd6d-4294-a3dc-3a3a749e7512 2025-07-06 20:54:39.935700 | orchestrator | 2025-07-06 20:54:39 | INFO  | Live migration of 
8c6b5176-fd6d-4294-a3dc-3a3a749e7512 (test-4) is still in progress 2025-07-06 20:54:42.257690 | orchestrator | 2025-07-06 20:54:42 | INFO  | Live migration of 8c6b5176-fd6d-4294-a3dc-3a3a749e7512 (test-4) is still in progress 2025-07-06 20:54:44.633523 | orchestrator | 2025-07-06 20:54:44 | INFO  | Live migration of 8c6b5176-fd6d-4294-a3dc-3a3a749e7512 (test-4) is still in progress 2025-07-06 20:54:47.115487 | orchestrator | 2025-07-06 20:54:47 | INFO  | Live migration of 8c6b5176-fd6d-4294-a3dc-3a3a749e7512 (test-4) is still in progress 2025-07-06 20:54:49.379820 | orchestrator | 2025-07-06 20:54:49 | INFO  | Live migration of 8c6b5176-fd6d-4294-a3dc-3a3a749e7512 (test-4) is still in progress 2025-07-06 20:54:51.766578 | orchestrator | 2025-07-06 20:54:51 | INFO  | Live migration of 8c6b5176-fd6d-4294-a3dc-3a3a749e7512 (test-4) is still in progress 2025-07-06 20:54:54.078985 | orchestrator | 2025-07-06 20:54:54 | INFO  | Live migration of 8c6b5176-fd6d-4294-a3dc-3a3a749e7512 (test-4) completed with status ACTIVE 2025-07-06 20:54:54.079095 | orchestrator | 2025-07-06 20:54:54 | INFO  | Live migrating server 3ef21d5c-66ad-4d83-88af-086eb34f1fd9 2025-07-06 20:55:04.841787 | orchestrator | 2025-07-06 20:55:04 | INFO  | Live migration of 3ef21d5c-66ad-4d83-88af-086eb34f1fd9 (test-3) is still in progress 2025-07-06 20:55:07.145768 | orchestrator | 2025-07-06 20:55:07 | INFO  | Live migration of 3ef21d5c-66ad-4d83-88af-086eb34f1fd9 (test-3) is still in progress 2025-07-06 20:55:09.539131 | orchestrator | 2025-07-06 20:55:09 | INFO  | Live migration of 3ef21d5c-66ad-4d83-88af-086eb34f1fd9 (test-3) is still in progress 2025-07-06 20:55:11.870073 | orchestrator | 2025-07-06 20:55:11 | INFO  | Live migration of 3ef21d5c-66ad-4d83-88af-086eb34f1fd9 (test-3) is still in progress 2025-07-06 20:55:14.202747 | orchestrator | 2025-07-06 20:55:14 | INFO  | Live migration of 3ef21d5c-66ad-4d83-88af-086eb34f1fd9 (test-3) is still in progress 2025-07-06 20:55:16.500583 | orchestrator 
| 2025-07-06 20:55:16 | INFO  | Live migration of 3ef21d5c-66ad-4d83-88af-086eb34f1fd9 (test-3) is still in progress 2025-07-06 20:55:18.765902 | orchestrator | 2025-07-06 20:55:18 | INFO  | Live migration of 3ef21d5c-66ad-4d83-88af-086eb34f1fd9 (test-3) completed with status ACTIVE 2025-07-06 20:55:18.766100 | orchestrator | 2025-07-06 20:55:18 | INFO  | Live migrating server c87662e4-2344-4841-8f6c-78a95db51822 2025-07-06 20:55:29.056187 | orchestrator | 2025-07-06 20:55:29 | INFO  | Live migration of c87662e4-2344-4841-8f6c-78a95db51822 (test-2) is still in progress 2025-07-06 20:55:31.387235 | orchestrator | 2025-07-06 20:55:31 | INFO  | Live migration of c87662e4-2344-4841-8f6c-78a95db51822 (test-2) is still in progress 2025-07-06 20:55:33.705518 | orchestrator | 2025-07-06 20:55:33 | INFO  | Live migration of c87662e4-2344-4841-8f6c-78a95db51822 (test-2) is still in progress 2025-07-06 20:55:36.003962 | orchestrator | 2025-07-06 20:55:36 | INFO  | Live migration of c87662e4-2344-4841-8f6c-78a95db51822 (test-2) is still in progress 2025-07-06 20:55:38.251861 | orchestrator | 2025-07-06 20:55:38 | INFO  | Live migration of c87662e4-2344-4841-8f6c-78a95db51822 (test-2) is still in progress 2025-07-06 20:55:40.498000 | orchestrator | 2025-07-06 20:55:40 | INFO  | Live migration of c87662e4-2344-4841-8f6c-78a95db51822 (test-2) is still in progress 2025-07-06 20:55:42.799747 | orchestrator | 2025-07-06 20:55:42 | INFO  | Live migration of c87662e4-2344-4841-8f6c-78a95db51822 (test-2) is still in progress 2025-07-06 20:55:45.178084 | orchestrator | 2025-07-06 20:55:45 | INFO  | Live migration of c87662e4-2344-4841-8f6c-78a95db51822 (test-2) completed with status ACTIVE 2025-07-06 20:55:45.178254 | orchestrator | 2025-07-06 20:55:45 | INFO  | Live migrating server 72f77100-c5e7-43f6-994d-eb88ae94aab0 2025-07-06 20:55:55.319100 | orchestrator | 2025-07-06 20:55:55 | INFO  | Live migration of 72f77100-c5e7-43f6-994d-eb88ae94aab0 (test-1) is still in progress 2025-07-06 
20:55:57.623877 | orchestrator | 2025-07-06 20:55:57 | INFO  | Live migration of 72f77100-c5e7-43f6-994d-eb88ae94aab0 (test-1) is still in progress 2025-07-06 20:55:59.984614 | orchestrator | 2025-07-06 20:55:59 | INFO  | Live migration of 72f77100-c5e7-43f6-994d-eb88ae94aab0 (test-1) is still in progress 2025-07-06 20:56:02.273145 | orchestrator | 2025-07-06 20:56:02 | INFO  | Live migration of 72f77100-c5e7-43f6-994d-eb88ae94aab0 (test-1) is still in progress 2025-07-06 20:56:04.562005 | orchestrator | 2025-07-06 20:56:04 | INFO  | Live migration of 72f77100-c5e7-43f6-994d-eb88ae94aab0 (test-1) is still in progress 2025-07-06 20:56:06.815399 | orchestrator | 2025-07-06 20:56:06 | INFO  | Live migration of 72f77100-c5e7-43f6-994d-eb88ae94aab0 (test-1) is still in progress 2025-07-06 20:56:09.166297 | orchestrator | 2025-07-06 20:56:09 | INFO  | Live migration of 72f77100-c5e7-43f6-994d-eb88ae94aab0 (test-1) is still in progress 2025-07-06 20:56:11.474853 | orchestrator | 2025-07-06 20:56:11 | INFO  | Live migration of 72f77100-c5e7-43f6-994d-eb88ae94aab0 (test-1) completed with status ACTIVE 2025-07-06 20:56:11.474961 | orchestrator | 2025-07-06 20:56:11 | INFO  | Live migrating server 811f2d63-b8d7-484b-b1ff-e4b198e2d293 2025-07-06 20:56:21.652647 | orchestrator | 2025-07-06 20:56:21 | INFO  | Live migration of 811f2d63-b8d7-484b-b1ff-e4b198e2d293 (test) is still in progress 2025-07-06 20:56:23.992387 | orchestrator | 2025-07-06 20:56:23 | INFO  | Live migration of 811f2d63-b8d7-484b-b1ff-e4b198e2d293 (test) is still in progress 2025-07-06 20:56:26.350639 | orchestrator | 2025-07-06 20:56:26 | INFO  | Live migration of 811f2d63-b8d7-484b-b1ff-e4b198e2d293 (test) is still in progress 2025-07-06 20:56:28.628787 | orchestrator | 2025-07-06 20:56:28 | INFO  | Live migration of 811f2d63-b8d7-484b-b1ff-e4b198e2d293 (test) is still in progress 2025-07-06 20:56:30.974272 | orchestrator | 2025-07-06 20:56:30 | INFO  | Live migration of 811f2d63-b8d7-484b-b1ff-e4b198e2d293 
(test) is still in progress
2025-07-06 20:56:33.267196 | orchestrator | 2025-07-06 20:56:33 | INFO  | Live migration of 811f2d63-b8d7-484b-b1ff-e4b198e2d293 (test) is still in progress
2025-07-06 20:56:35.529490 | orchestrator | 2025-07-06 20:56:35 | INFO  | Live migration of 811f2d63-b8d7-484b-b1ff-e4b198e2d293 (test) is still in progress
2025-07-06 20:56:37.807457 | orchestrator | 2025-07-06 20:56:37 | INFO  | Live migration of 811f2d63-b8d7-484b-b1ff-e4b198e2d293 (test) is still in progress
2025-07-06 20:56:40.109557 | orchestrator | 2025-07-06 20:56:40 | INFO  | Live migration of 811f2d63-b8d7-484b-b1ff-e4b198e2d293 (test) completed with status ACTIVE
2025-07-06 20:56:40.382527 | orchestrator | + compute_list
2025-07-06 20:56:40.382652 | orchestrator | + osism manage compute list testbed-node-3
2025-07-06 20:56:42.961101 | orchestrator | +------+--------+----------+
2025-07-06 20:56:42.961206 | orchestrator | | ID | Name | Status |
2025-07-06 20:56:42.961222 | orchestrator | |------+--------+----------|
2025-07-06 20:56:42.961234 | orchestrator | +------+--------+----------+
2025-07-06 20:56:43.242310 | orchestrator | + osism manage compute list testbed-node-4
2025-07-06 20:56:45.904231 | orchestrator | +------+--------+----------+
2025-07-06 20:56:45.904420 | orchestrator | | ID | Name | Status |
2025-07-06 20:56:45.904448 | orchestrator | |------+--------+----------|
2025-07-06 20:56:45.904465 | orchestrator | +------+--------+----------+
2025-07-06 20:56:46.171001 | orchestrator | + osism manage compute list testbed-node-5
2025-07-06 20:56:49.154542 | orchestrator | +--------------------------------------+--------+----------+
2025-07-06 20:56:49.154708 | orchestrator | | ID | Name | Status |
2025-07-06 20:56:49.154750 | orchestrator | |--------------------------------------+--------+----------|
2025-07-06 20:56:49.154755 | orchestrator | | 8c6b5176-fd6d-4294-a3dc-3a3a749e7512 | test-4 | ACTIVE |
2025-07-06 20:56:49.154760 | orchestrator | | 3ef21d5c-66ad-4d83-88af-086eb34f1fd9 | test-3 | ACTIVE |
2025-07-06 20:56:49.154764 | orchestrator | | c87662e4-2344-4841-8f6c-78a95db51822 | test-2 | ACTIVE |
2025-07-06 20:56:49.154769 | orchestrator | | 72f77100-c5e7-43f6-994d-eb88ae94aab0 | test-1 | ACTIVE |
2025-07-06 20:56:49.154774 | orchestrator | | 811f2d63-b8d7-484b-b1ff-e4b198e2d293 | test | ACTIVE |
2025-07-06 20:56:49.154778 | orchestrator | +--------------------------------------+--------+----------+
2025-07-06 20:56:49.419435 | orchestrator | + server_ping
2025-07-06 20:56:49.420388 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-07-06 20:56:49.420533 | orchestrator | ++ tr -d '\r'
2025-07-06 20:56:52.163445 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-07-06 20:56:52.163595 | orchestrator | + ping -c3 192.168.112.133
2025-07-06 20:56:52.181055 | orchestrator | PING 192.168.112.133 (192.168.112.133) 56(84) bytes of data.
2025-07-06 20:56:52.181147 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=1 ttl=63 time=13.5 ms 2025-07-06 20:56:53.172059 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=2 ttl=63 time=2.76 ms 2025-07-06 20:56:54.173734 | orchestrator | 64 bytes from 192.168.112.133: icmp_seq=3 ttl=63 time=1.79 ms 2025-07-06 20:56:54.173835 | orchestrator | 2025-07-06 20:56:54.173850 | orchestrator | --- 192.168.112.133 ping statistics --- 2025-07-06 20:56:54.173863 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-07-06 20:56:54.173875 | orchestrator | rtt min/avg/max/mdev = 1.791/6.019/13.502/5.306 ms 2025-07-06 20:56:54.173887 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-06 20:56:54.173899 | orchestrator | + ping -c3 192.168.112.103 2025-07-06 20:56:54.185771 | orchestrator | PING 192.168.112.103 (192.168.112.103) 56(84) bytes of data. 2025-07-06 20:56:54.185806 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=1 ttl=63 time=7.14 ms 2025-07-06 20:56:55.181703 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=2 ttl=63 time=2.38 ms 2025-07-06 20:56:56.183288 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=3 ttl=63 time=1.94 ms 2025-07-06 20:56:56.183498 | orchestrator | 2025-07-06 20:56:56.183527 | orchestrator | --- 192.168.112.103 ping statistics --- 2025-07-06 20:56:56.183548 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-07-06 20:56:56.183567 | orchestrator | rtt min/avg/max/mdev = 1.937/3.818/7.139/2.355 ms 2025-07-06 20:56:56.183606 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-06 20:56:56.183640 | orchestrator | + ping -c3 192.168.112.177 2025-07-06 20:56:56.199484 | orchestrator | PING 192.168.112.177 (192.168.112.177) 56(84) bytes of data. 
2025-07-06 20:56:56.199584 | orchestrator | 64 bytes from 192.168.112.177: icmp_seq=1 ttl=63 time=9.63 ms 2025-07-06 20:56:57.194894 | orchestrator | 64 bytes from 192.168.112.177: icmp_seq=2 ttl=63 time=2.77 ms 2025-07-06 20:56:58.195743 | orchestrator | 64 bytes from 192.168.112.177: icmp_seq=3 ttl=63 time=1.68 ms 2025-07-06 20:56:58.195933 | orchestrator | 2025-07-06 20:56:58.195950 | orchestrator | --- 192.168.112.177 ping statistics --- 2025-07-06 20:56:58.195964 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-07-06 20:56:58.195975 | orchestrator | rtt min/avg/max/mdev = 1.681/4.694/9.630/3.518 ms 2025-07-06 20:56:58.196075 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-06 20:56:58.196092 | orchestrator | + ping -c3 192.168.112.187 2025-07-06 20:56:58.205832 | orchestrator | PING 192.168.112.187 (192.168.112.187) 56(84) bytes of data. 2025-07-06 20:56:58.205945 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=1 ttl=63 time=6.55 ms 2025-07-06 20:56:59.203665 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=2 ttl=63 time=2.21 ms 2025-07-06 20:57:00.204458 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=3 ttl=63 time=1.56 ms 2025-07-06 20:57:00.204632 | orchestrator | 2025-07-06 20:57:00.204646 | orchestrator | --- 192.168.112.187 ping statistics --- 2025-07-06 20:57:00.204658 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-07-06 20:57:00.204703 | orchestrator | rtt min/avg/max/mdev = 1.556/3.438/6.547/2.214 ms 2025-07-06 20:57:00.204801 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-06 20:57:00.204816 | orchestrator | + ping -c3 192.168.112.125 2025-07-06 20:57:00.217213 | orchestrator | PING 192.168.112.125 (192.168.112.125) 56(84) bytes of data. 
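Taken together, the runs above drain testbed-node-3 onto testbed-node-4 and then testbed-node-4 onto testbed-node-5, listing the source host after each wave to confirm its compute list is empty. The cycle can be sketched as a loop over host pairs — a reconstruction of the pattern seen in the trace, not the job's actual script:

```shell
#!/usr/bin/env bash
set -euo pipefail

evacuate_chain() {
    # Rolling drain: live-migrate everything off each host onto the next
    # one in the list, then list the drained host (expect an empty table).
    local src="$1"; shift
    local dst
    for dst in "$@"; do
        osism manage compute migrate --yes --target "$dst" "$src"
        osism manage compute list "$src"
        src="$dst"
    done
}

# The order used by the job above:
# evacuate_chain testbed-node-3 testbed-node-4 testbed-node-5
```

Running the ping check between waves (as the job does) is what distinguishes this from a blind evacuation: each hop is verified both from the scheduler's view (empty source host) and from the network's view (floating IPs still answer).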
2025-07-06 20:57:00.217363 | orchestrator | 64 bytes from 192.168.112.125: icmp_seq=1 ttl=63 time=6.96 ms
2025-07-06 20:57:01.214122 | orchestrator | 64 bytes from 192.168.112.125: icmp_seq=2 ttl=63 time=2.16 ms
2025-07-06 20:57:02.215429 | orchestrator | 64 bytes from 192.168.112.125: icmp_seq=3 ttl=63 time=1.68 ms
2025-07-06 20:57:02.215566 | orchestrator |
2025-07-06 20:57:02.215579 | orchestrator | --- 192.168.112.125 ping statistics ---
2025-07-06 20:57:02.215591 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-07-06 20:57:02.215600 | orchestrator | rtt min/avg/max/mdev = 1.683/3.600/6.959/2.382 ms
2025-07-06 20:57:02.417228 | orchestrator | ok: Runtime: 0:18:28.446319
2025-07-06 20:57:02.462521 |
2025-07-06 20:57:02.462657 | TASK [Run tempest]
2025-07-06 20:57:02.996636 | orchestrator | skipping: Conditional result was False
2025-07-06 20:57:03.014398 |
2025-07-06 20:57:03.014578 | TASK [Check prometheus alert status]
2025-07-06 20:57:03.554516 | orchestrator | skipping: Conditional result was False
2025-07-06 20:57:03.557751 |
2025-07-06 20:57:03.557954 | PLAY RECAP
2025-07-06 20:57:03.558110 | orchestrator | ok: 24 changed: 11 unreachable: 0 failed: 0 skipped: 5 rescued: 0 ignored: 0
2025-07-06 20:57:03.558181 |
2025-07-06 20:57:03.784579 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-07-06 20:57:03.786788 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-07-06 20:57:04.549691 |
2025-07-06 20:57:04.549866 | PLAY [Post output play]
2025-07-06 20:57:04.567217 |
2025-07-06 20:57:04.567374 | LOOP [stage-output : Register sources]
2025-07-06 20:57:04.624228 |
2025-07-06 20:57:04.624515 | TASK [stage-output : Check sudo]
2025-07-06 20:57:05.475679 | orchestrator | sudo: a password is required
2025-07-06 20:57:05.664644 | orchestrator | ok: Runtime: 0:00:00.018271
2025-07-06 20:57:05.680712 |
2025-07-06 20:57:05.680884 | LOOP [stage-output : Set source and destination for files and folders]
2025-07-06 20:57:05.716344 |
2025-07-06 20:57:05.716591 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-07-06 20:57:05.793862 | orchestrator | ok
2025-07-06 20:57:05.803082 |
2025-07-06 20:57:05.803211 | LOOP [stage-output : Ensure target folders exist]
2025-07-06 20:57:06.263648 | orchestrator | ok: "docs"
2025-07-06 20:57:06.263889 |
2025-07-06 20:57:06.532034 | orchestrator | ok: "artifacts"
2025-07-06 20:57:06.796946 | orchestrator | ok: "logs"
2025-07-06 20:57:06.818732 |
2025-07-06 20:57:06.818926 | LOOP [stage-output : Copy files and folders to staging folder]
2025-07-06 20:57:06.858607 |
2025-07-06 20:57:06.858939 | TASK [stage-output : Make all log files readable]
2025-07-06 20:57:07.143580 | orchestrator | ok
2025-07-06 20:57:07.153204 |
2025-07-06 20:57:07.153403 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-07-06 20:57:07.187962 | orchestrator | skipping: Conditional result was False
2025-07-06 20:57:07.198877 |
2025-07-06 20:57:07.199013 | TASK [stage-output : Discover log files for compression]
2025-07-06 20:57:07.233505 | orchestrator | skipping: Conditional result was False
2025-07-06 20:57:07.248930 |
2025-07-06 20:57:07.249090 | LOOP [stage-output : Archive everything from logs]
2025-07-06 20:57:07.294185 |
2025-07-06 20:57:07.294406 | PLAY [Post cleanup play]
2025-07-06 20:57:07.305595 |
2025-07-06 20:57:07.305781 | TASK [Set cloud fact (Zuul deployment)]
2025-07-06 20:57:07.348427 | orchestrator | ok
2025-07-06 20:57:07.356753 |
2025-07-06 20:57:07.356858 | TASK [Set cloud fact (local deployment)]
2025-07-06 20:57:07.391033 | orchestrator | skipping: Conditional result was False
2025-07-06 20:57:07.407753 |
2025-07-06 20:57:07.407901 | TASK [Clean the cloud environment]
2025-07-06 20:57:08.004202 | orchestrator | 2025-07-06 20:57:08 - clean up servers
2025-07-06 20:57:08.819042 | orchestrator | 2025-07-06 20:57:08 - testbed-manager 2025-07-06
20:57:08.913339 | orchestrator | 2025-07-06 20:57:08 - testbed-node-0 2025-07-06 20:57:09.012948 | orchestrator | 2025-07-06 20:57:09 - testbed-node-5 2025-07-06 20:57:09.124628 | orchestrator | 2025-07-06 20:57:09 - testbed-node-1 2025-07-06 20:57:09.218127 | orchestrator | 2025-07-06 20:57:09 - testbed-node-2 2025-07-06 20:57:09.322390 | orchestrator | 2025-07-06 20:57:09 - testbed-node-3 2025-07-06 20:57:09.417685 | orchestrator | 2025-07-06 20:57:09 - testbed-node-4 2025-07-06 20:57:09.512882 | orchestrator | 2025-07-06 20:57:09 - clean up keypairs 2025-07-06 20:57:09.535517 | orchestrator | 2025-07-06 20:57:09 - testbed 2025-07-06 20:57:09.561983 | orchestrator | 2025-07-06 20:57:09 - wait for servers to be gone 2025-07-06 20:57:18.533585 | orchestrator | 2025-07-06 20:57:18 - clean up ports 2025-07-06 20:57:18.704793 | orchestrator | 2025-07-06 20:57:18 - 1af235d5-cff2-4638-a00c-81caa85f8f48 2025-07-06 20:57:18.954143 | orchestrator | 2025-07-06 20:57:18 - 1b15a983-45d2-4505-b26e-8bc5d5455803 2025-07-06 20:57:19.226734 | orchestrator | 2025-07-06 20:57:19 - 53b1d9ff-3d32-4691-b0b3-2d366bc85eb2 2025-07-06 20:57:19.460112 | orchestrator | 2025-07-06 20:57:19 - 83774561-1211-4569-934c-75ba01f5ff89 2025-07-06 20:57:19.683146 | orchestrator | 2025-07-06 20:57:19 - 8b1d40d3-be5f-4ee9-a914-8a48bf2275d8 2025-07-06 20:57:19.897079 | orchestrator | 2025-07-06 20:57:19 - 97a380d2-ebdb-4194-b554-8088dc8c6bce 2025-07-06 20:57:20.172283 | orchestrator | 2025-07-06 20:57:20 - b31e8764-d178-46eb-beb4-8194960cf3db 2025-07-06 20:57:20.591046 | orchestrator | 2025-07-06 20:57:20 - clean up volumes 2025-07-06 20:57:20.703672 | orchestrator | 2025-07-06 20:57:20 - testbed-volume-1-node-base 2025-07-06 20:57:20.751007 | orchestrator | 2025-07-06 20:57:20 - testbed-volume-0-node-base 2025-07-06 20:57:20.792218 | orchestrator | 2025-07-06 20:57:20 - testbed-volume-4-node-base 2025-07-06 20:57:20.832132 | orchestrator | 2025-07-06 20:57:20 - testbed-volume-2-node-base 2025-07-06 
20:57:20.872751 | orchestrator | 2025-07-06 20:57:20 - testbed-volume-5-node-base 2025-07-06 20:57:20.913241 | orchestrator | 2025-07-06 20:57:20 - testbed-volume-3-node-base 2025-07-06 20:57:20.953312 | orchestrator | 2025-07-06 20:57:20 - testbed-volume-manager-base 2025-07-06 20:57:20.996207 | orchestrator | 2025-07-06 20:57:20 - testbed-volume-0-node-3 2025-07-06 20:57:21.040677 | orchestrator | 2025-07-06 20:57:21 - testbed-volume-1-node-4 2025-07-06 20:57:21.081235 | orchestrator | 2025-07-06 20:57:21 - testbed-volume-4-node-4 2025-07-06 20:57:21.122584 | orchestrator | 2025-07-06 20:57:21 - testbed-volume-2-node-5 2025-07-06 20:57:21.164796 | orchestrator | 2025-07-06 20:57:21 - testbed-volume-6-node-3 2025-07-06 20:57:21.204597 | orchestrator | 2025-07-06 20:57:21 - testbed-volume-5-node-5 2025-07-06 20:57:21.248201 | orchestrator | 2025-07-06 20:57:21 - testbed-volume-7-node-4 2025-07-06 20:57:21.290309 | orchestrator | 2025-07-06 20:57:21 - testbed-volume-8-node-5 2025-07-06 20:57:21.333314 | orchestrator | 2025-07-06 20:57:21 - testbed-volume-3-node-3 2025-07-06 20:57:21.375519 | orchestrator | 2025-07-06 20:57:21 - disconnect routers 2025-07-06 20:57:21.475645 | orchestrator | 2025-07-06 20:57:21 - testbed 2025-07-06 20:57:22.362784 | orchestrator | 2025-07-06 20:57:22 - clean up subnets 2025-07-06 20:57:22.828337 | orchestrator | 2025-07-06 20:57:22 - subnet-testbed-management 2025-07-06 20:57:22.997211 | orchestrator | 2025-07-06 20:57:22 - clean up networks 2025-07-06 20:57:23.190508 | orchestrator | 2025-07-06 20:57:23 - net-testbed-management 2025-07-06 20:57:23.467454 | orchestrator | 2025-07-06 20:57:23 - clean up security groups 2025-07-06 20:57:23.509554 | orchestrator | 2025-07-06 20:57:23 - testbed-management 2025-07-06 20:57:23.630526 | orchestrator | 2025-07-06 20:57:23 - testbed-node 2025-07-06 20:57:23.736220 | orchestrator | 2025-07-06 20:57:23 - clean up floating ips 2025-07-06 20:57:23.774778 | orchestrator | 2025-07-06 20:57:23 - 
81.163.192.163 2025-07-06 20:57:24.158988 | orchestrator | 2025-07-06 20:57:24 - clean up routers 2025-07-06 20:57:24.730281 | orchestrator | 2025-07-06 20:57:24 - testbed 2025-07-06 20:57:26.473233 | orchestrator | ok: Runtime: 0:00:18.338217 2025-07-06 20:57:26.477650 | 2025-07-06 20:57:26.477854 | PLAY RECAP 2025-07-06 20:57:26.478046 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2025-07-06 20:57:26.478157 | 2025-07-06 20:57:26.614439 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-07-06 20:57:26.616817 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-07-06 20:57:27.381337 | 2025-07-06 20:57:27.381507 | PLAY [Cleanup play] 2025-07-06 20:57:27.406241 | 2025-07-06 20:57:27.406452 | TASK [Set cloud fact (Zuul deployment)] 2025-07-06 20:57:27.497412 | orchestrator | ok 2025-07-06 20:57:27.508999 | 2025-07-06 20:57:27.509182 | TASK [Set cloud fact (local deployment)] 2025-07-06 20:57:27.534430 | orchestrator | skipping: Conditional result was False 2025-07-06 20:57:27.546071 | 2025-07-06 20:57:27.546205 | TASK [Clean the cloud environment] 2025-07-06 20:57:28.727625 | orchestrator | 2025-07-06 20:57:28 - clean up servers 2025-07-06 20:57:29.199197 | orchestrator | 2025-07-06 20:57:29 - clean up keypairs 2025-07-06 20:57:29.219042 | orchestrator | 2025-07-06 20:57:29 - wait for servers to be gone 2025-07-06 20:57:29.266340 | orchestrator | 2025-07-06 20:57:29 - clean up ports 2025-07-06 20:57:29.354630 | orchestrator | 2025-07-06 20:57:29 - clean up volumes 2025-07-06 20:57:29.431569 | orchestrator | 2025-07-06 20:57:29 - disconnect routers 2025-07-06 20:57:29.461602 | orchestrator | 2025-07-06 20:57:29 - clean up subnets 2025-07-06 20:57:29.481624 | orchestrator | 2025-07-06 20:57:29 - clean up networks 2025-07-06 20:57:29.635646 | orchestrator | 2025-07-06 20:57:29 - clean up security groups 2025-07-06 20:57:29.670424 | orchestrator 
| 2025-07-06 20:57:29 - clean up floating ips 2025-07-06 20:57:29.697788 | orchestrator | 2025-07-06 20:57:29 - clean up routers 2025-07-06 20:57:30.085766 | orchestrator | ok: Runtime: 0:00:01.404474 2025-07-06 20:57:30.089815 | 2025-07-06 20:57:30.090091 | PLAY RECAP 2025-07-06 20:57:30.090227 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2025-07-06 20:57:30.090325 | 2025-07-06 20:57:30.213969 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-07-06 20:57:30.216082 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-07-06 20:57:30.974690 | 2025-07-06 20:57:30.974883 | PLAY [Base post-fetch] 2025-07-06 20:57:30.990390 | 2025-07-06 20:57:30.990519 | TASK [fetch-output : Set log path for multiple nodes] 2025-07-06 20:57:31.035525 | orchestrator | skipping: Conditional result was False 2025-07-06 20:57:31.042343 | 2025-07-06 20:57:31.042489 | TASK [fetch-output : Set log path for single node] 2025-07-06 20:57:31.083831 | orchestrator | ok 2025-07-06 20:57:31.089950 | 2025-07-06 20:57:31.090064 | LOOP [fetch-output : Ensure local output dirs] 2025-07-06 20:57:31.568172 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/dac217f43f7d42b29ccad2ebb7bfad75/work/logs" 2025-07-06 20:57:31.830984 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/dac217f43f7d42b29ccad2ebb7bfad75/work/artifacts" 2025-07-06 20:57:32.114130 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/dac217f43f7d42b29ccad2ebb7bfad75/work/docs" 2025-07-06 20:57:32.140018 | 2025-07-06 20:57:32.140311 | LOOP [fetch-output : Collect logs, artifacts and docs] 2025-07-06 20:57:33.031131 | orchestrator | changed: .d..t...... ./ 2025-07-06 20:57:33.031468 | orchestrator | changed: All items complete 2025-07-06 20:57:33.031519 | 2025-07-06 20:57:33.751944 | orchestrator | changed: .d..t...... 
./ 2025-07-06 20:57:34.488708 | orchestrator | changed: .d..t...... ./ 2025-07-06 20:57:34.514440 | 2025-07-06 20:57:34.514588 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2025-07-06 20:57:34.538851 | orchestrator | skipping: Conditional result was False 2025-07-06 20:57:34.542701 | orchestrator | skipping: Conditional result was False 2025-07-06 20:57:34.568905 | 2025-07-06 20:57:34.569010 | PLAY RECAP 2025-07-06 20:57:34.569114 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2025-07-06 20:57:34.569254 | 2025-07-06 20:57:34.705701 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-07-06 20:57:34.706725 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-07-06 20:57:35.473479 | 2025-07-06 20:57:35.473638 | PLAY [Base post] 2025-07-06 20:57:35.488628 | 2025-07-06 20:57:35.488772 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2025-07-06 20:57:36.512216 | orchestrator | changed 2025-07-06 20:57:36.519366 | 2025-07-06 20:57:36.519475 | PLAY RECAP 2025-07-06 20:57:36.519537 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2025-07-06 20:57:36.519600 | 2025-07-06 20:57:36.637492 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-07-06 20:57:36.638542 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2025-07-06 20:57:37.419490 | 2025-07-06 20:57:37.419672 | PLAY [Base post-logs] 2025-07-06 20:57:37.430872 | 2025-07-06 20:57:37.431034 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2025-07-06 20:57:37.912960 | localhost | changed 2025-07-06 20:57:37.923198 | 2025-07-06 20:57:37.923379 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2025-07-06 20:57:37.948800 | localhost | ok 2025-07-06 20:57:37.952027 | 2025-07-06 
20:57:37.952129 | TASK [Set zuul-log-path fact] 2025-07-06 20:57:37.966801 | localhost | ok 2025-07-06 20:57:37.975781 | 2025-07-06 20:57:37.975891 | TASK [set-zuul-log-path-fact : Set log path for a build] 2025-07-06 20:57:38.000547 | localhost | ok 2025-07-06 20:57:38.003584 | 2025-07-06 20:57:38.003690 | TASK [upload-logs : Create log directories] 2025-07-06 20:57:38.497219 | localhost | changed 2025-07-06 20:57:38.500375 | 2025-07-06 20:57:38.500489 | TASK [upload-logs : Ensure logs are readable before uploading] 2025-07-06 20:57:39.022386 | localhost -> localhost | ok: Runtime: 0:00:00.004587 2025-07-06 20:57:39.030385 | 2025-07-06 20:57:39.030567 | TASK [upload-logs : Upload logs to log server] 2025-07-06 20:57:39.604926 | localhost | Output suppressed because no_log was given 2025-07-06 20:57:39.608755 | 2025-07-06 20:57:39.608877 | LOOP [upload-logs : Compress console log and json output] 2025-07-06 20:57:39.661233 | localhost | skipping: Conditional result was False 2025-07-06 20:57:39.665544 | localhost | skipping: Conditional result was False 2025-07-06 20:57:39.679825 | 2025-07-06 20:57:39.680108 | LOOP [upload-logs : Upload compressed console log and json output] 2025-07-06 20:57:39.727935 | localhost | skipping: Conditional result was False 2025-07-06 20:57:39.728747 | 2025-07-06 20:57:39.731464 | localhost | skipping: Conditional result was False 2025-07-06 20:57:39.741611 | 2025-07-06 20:57:39.741725 | LOOP [upload-logs : Upload console log and json output]
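Both "Clean the cloud environment" runs above walk the same fixed teardown order: servers and keypairs first, then a wait for the servers to disappear, then ports, volumes, router disconnects, subnets, networks, security groups, floating IPs, and finally the routers themselves, so that each resource is deleted only after everything attached to it is gone. A minimal sketch of that ordering as data (the helper name `plan_cleanup` is hypothetical, not part of the testbed scripts, and the real cleanup issues OpenStack API calls instead of returning a plan):

```python
# Teardown order as logged by the "Clean the cloud environment" task.
# Dependents come before the resources they are attached to: ports
# before networks, subnets after router disconnects, routers last.
CLEANUP_ORDER = [
    "servers",
    "keypairs",
    "wait for servers to be gone",
    "ports",
    "volumes",
    "disconnect routers",
    "subnets",
    "networks",
    "security groups",
    "floating ips",
    "routers",
]


def plan_cleanup(present: set) -> list:
    """Return cleanup steps in dependency-safe order.

    `present` is the set of resource kinds that still exist in the
    project; the two synchronization steps are always retained, which
    is why the second cleanup run above still prints every step even
    though the first run already removed everything.
    """
    always = {"wait for servers to be gone", "disconnect routers"}
    return [s for s in CLEANUP_ORDER if s in present or s in always]
```

This explains why the second, idempotent cleanup pass (the `cleanup.yml` post-run) finishes in about a second: every step is attempted in the same order, but each finds nothing left to delete.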