2025-06-19 09:41:17.889778 | Job console starting
2025-06-19 09:41:17.904255 | Updating git repos
2025-06-19 09:41:17.973993 | Cloning repos into workspace
2025-06-19 09:41:18.171991 | Restoring repo states
2025-06-19 09:41:18.204218 | Merging changes
2025-06-19 09:41:18.204246 | Checking out repos
2025-06-19 09:41:18.463641 | Preparing playbooks
2025-06-19 09:41:19.225340 | Running Ansible setup
2025-06-19 09:41:23.535156 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-06-19 09:41:24.343123 |
2025-06-19 09:41:24.343289 | PLAY [Base pre]
2025-06-19 09:41:24.360996 |
2025-06-19 09:41:24.361151 | TASK [Setup log path fact]
2025-06-19 09:41:24.391717 | orchestrator | ok
2025-06-19 09:41:24.410020 |
2025-06-19 09:41:24.410199 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-06-19 09:41:24.450578 | orchestrator | ok
2025-06-19 09:41:24.462700 |
2025-06-19 09:41:24.462856 | TASK [emit-job-header : Print job information]
2025-06-19 09:41:24.503333 | # Job Information
2025-06-19 09:41:24.503516 | Ansible Version: 2.16.14
2025-06-19 09:41:24.503552 | Job: testbed-deploy-in-a-nutshell-ubuntu-24.04
2025-06-19 09:41:24.503585 | Pipeline: post
2025-06-19 09:41:24.503608 | Executor: 521e9411259a
2025-06-19 09:41:24.503628 | Triggered by: https://github.com/osism/testbed/commit/8c6f86999b71320add3ced687c09e44451a2776e
2025-06-19 09:41:24.503650 | Event ID: 8806177a-4cf1-11f0-9023-9603392940ac
2025-06-19 09:41:24.511298 |
2025-06-19 09:41:24.511438 | LOOP [emit-job-header : Print node information]
2025-06-19 09:41:24.629670 | orchestrator | ok:
2025-06-19 09:41:24.629891 | orchestrator | # Node Information
2025-06-19 09:41:24.629927 | orchestrator | Inventory Hostname: orchestrator
2025-06-19 09:41:24.629952 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-06-19 09:41:24.629973 | orchestrator | Username: zuul-testbed06
2025-06-19 09:41:24.629994 | orchestrator | Distro: Debian 12.11
2025-06-19 09:41:24.630017 | orchestrator | Provider: static-testbed
2025-06-19 09:41:24.630038 | orchestrator | Region:
2025-06-19 09:41:24.630059 | orchestrator | Label: testbed-orchestrator
2025-06-19 09:41:24.630079 | orchestrator | Product Name: OpenStack Nova
2025-06-19 09:41:24.630099 | orchestrator | Interface IP: 81.163.193.140
2025-06-19 09:41:24.644350 |
2025-06-19 09:41:24.644486 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-06-19 09:41:25.128828 | orchestrator -> localhost | changed
2025-06-19 09:41:25.141001 |
2025-06-19 09:41:25.141355 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-06-19 09:41:26.288083 | orchestrator -> localhost | changed
2025-06-19 09:41:26.303455 |
2025-06-19 09:41:26.303601 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-06-19 09:41:26.619021 | orchestrator -> localhost | ok
2025-06-19 09:41:26.627198 |
2025-06-19 09:41:26.627364 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-06-19 09:41:26.658043 | orchestrator | ok
2025-06-19 09:41:26.675505 | orchestrator | included: /var/lib/zuul/builds/ec99971b166f4fa8be6dbdcce14b0b3d/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-06-19 09:41:26.684205 |
2025-06-19 09:41:26.684382 | TASK [add-build-sshkey : Create Temp SSH key]
2025-06-19 09:41:28.416747 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-06-19 09:41:28.417161 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/ec99971b166f4fa8be6dbdcce14b0b3d/work/ec99971b166f4fa8be6dbdcce14b0b3d_id_rsa
2025-06-19 09:41:28.417231 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/ec99971b166f4fa8be6dbdcce14b0b3d/work/ec99971b166f4fa8be6dbdcce14b0b3d_id_rsa.pub
2025-06-19 09:41:28.417379 | orchestrator -> localhost | The key fingerprint is:
2025-06-19 09:41:28.417447 | orchestrator -> localhost | SHA256:sePnNevEMqA0dDTnIwYJotL2JVIuTTCsPVs1dQ7Josc zuul-build-sshkey
2025-06-19 09:41:28.417491 | orchestrator -> localhost | The key's randomart image is:
2025-06-19 09:41:28.417738 | orchestrator -> localhost | +---[RSA 3072]----+
2025-06-19 09:41:28.417805 | orchestrator -> localhost | | .+.+..+=.o |
2025-06-19 09:41:28.417845 | orchestrator -> localhost | | o.B =ooB |
2025-06-19 09:41:28.417881 | orchestrator -> localhost | |oo= ++oo= + |
2025-06-19 09:41:28.417915 | orchestrator -> localhost | |o.o+o+Eo + . |
2025-06-19 09:41:28.417948 | orchestrator -> localhost | | +..o S |
2025-06-19 09:41:28.417993 | orchestrator -> localhost | | . . + o . |
2025-06-19 09:41:28.418028 | orchestrator -> localhost | | . . + = |
2025-06-19 09:41:28.418117 | orchestrator -> localhost | | o = o |
2025-06-19 09:41:28.418162 | orchestrator -> localhost | | ..o |
2025-06-19 09:41:28.418378 | orchestrator -> localhost | +----[SHA256]-----+
2025-06-19 09:41:28.418482 | orchestrator -> localhost | ok: Runtime: 0:00:01.221241
2025-06-19 09:41:28.431894 |
2025-06-19 09:41:28.432185 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-06-19 09:41:28.464608 | orchestrator | ok
2025-06-19 09:41:28.478648 | orchestrator | included: /var/lib/zuul/builds/ec99971b166f4fa8be6dbdcce14b0b3d/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-06-19 09:41:28.490398 |
2025-06-19 09:41:28.490525 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-06-19 09:41:28.517809 | orchestrator | skipping: Conditional result was False
2025-06-19 09:41:28.526806 |
2025-06-19 09:41:28.526954 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-06-19 09:41:29.134001 | orchestrator | changed
2025-06-19 09:41:29.145674 |
2025-06-19 09:41:29.145988 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-06-19 09:41:29.438242 | orchestrator | ok
2025-06-19 09:41:29.445234 |
2025-06-19 09:41:29.445382 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-06-19 09:41:29.880561 | orchestrator | ok
2025-06-19 09:41:29.887533 |
2025-06-19 09:41:29.887717 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-06-19 09:41:30.287190 | orchestrator | ok
2025-06-19 09:41:30.297502 |
2025-06-19 09:41:30.297640 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-06-19 09:41:30.324464 | orchestrator | skipping: Conditional result was False
2025-06-19 09:41:30.340799 |
2025-06-19 09:41:30.341115 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-06-19 09:41:30.970278 | orchestrator -> localhost | changed
2025-06-19 09:41:30.998639 |
2025-06-19 09:41:30.998925 | TASK [add-build-sshkey : Add back temp key]
2025-06-19 09:41:31.400671 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/ec99971b166f4fa8be6dbdcce14b0b3d/work/ec99971b166f4fa8be6dbdcce14b0b3d_id_rsa (zuul-build-sshkey)
2025-06-19 09:41:31.401033 | orchestrator -> localhost | ok: Runtime: 0:00:00.027258
2025-06-19 09:41:31.408629 |
2025-06-19 09:41:31.408760 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-06-19 09:41:31.867605 | orchestrator | ok
2025-06-19 09:41:31.876281 |
2025-06-19 09:41:31.876437 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-06-19 09:41:31.902187 | orchestrator | skipping: Conditional result was False
2025-06-19 09:41:31.984585 |
2025-06-19 09:41:31.984740 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-06-19 09:41:32.389284 | orchestrator | ok
2025-06-19 09:41:32.403704 |
2025-06-19 09:41:32.403859 | TASK [validate-host : Define zuul_info_dir fact]
2025-06-19 09:41:32.448381 | orchestrator | ok
2025-06-19 09:41:32.458151 |
2025-06-19 09:41:32.458280 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-06-19 09:41:32.727992 | orchestrator -> localhost | ok
2025-06-19 09:41:32.737192 |
2025-06-19 09:41:32.737296 | TASK [validate-host : Collect information about the host]
2025-06-19 09:41:34.932443 | orchestrator | ok
2025-06-19 09:41:34.950215 |
2025-06-19 09:41:34.950422 | TASK [validate-host : Sanitize hostname]
2025-06-19 09:41:35.021837 | orchestrator | ok
2025-06-19 09:41:35.029947 |
2025-06-19 09:41:35.030123 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-06-19 09:41:35.571161 | orchestrator -> localhost | changed
2025-06-19 09:41:35.577469 |
2025-06-19 09:41:35.577565 | TASK [validate-host : Collect information about zuul worker]
2025-06-19 09:41:36.044245 | orchestrator | ok
2025-06-19 09:41:36.052145 |
2025-06-19 09:41:36.052351 | TASK [validate-host : Write out all zuul information for each host]
2025-06-19 09:41:36.591964 | orchestrator -> localhost | changed
2025-06-19 09:41:36.614744 |
2025-06-19 09:41:36.614885 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-06-19 09:41:36.910272 | orchestrator | ok
2025-06-19 09:41:36.918528 |
2025-06-19 09:41:36.918706 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-06-19 09:42:16.604549 | orchestrator | changed:
2025-06-19 09:42:16.607197 | orchestrator | .d..t...... src/
2025-06-19 09:42:16.607265 | orchestrator | .d..t...... src/github.com/
2025-06-19 09:42:16.607293 | orchestrator | .d..t...... src/github.com/osism/
2025-06-19 09:42:16.607327 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-06-19 09:42:16.607348 | orchestrator | RedHat.yml
2025-06-19 09:42:16.622605 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-06-19 09:42:16.622623 | orchestrator | RedHat.yml
2025-06-19 09:42:16.622677 | orchestrator | = 2.2.0"...
2025-06-19 09:42:31.448004 | orchestrator | 09:42:31.447 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-06-19 09:42:31.528068 | orchestrator | 09:42:31.527 STDOUT terraform: - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2025-06-19 09:42:32.570658 | orchestrator | 09:42:32.570 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-06-19 09:42:33.433272 | orchestrator | 09:42:33.433 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-06-19 09:42:34.400933 | orchestrator | 09:42:34.400 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-06-19 09:42:35.260132 | orchestrator | 09:42:35.259 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-06-19 09:42:36.198483 | orchestrator | 09:42:36.198 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.2.0...
2025-06-19 09:42:37.244051 | orchestrator | 09:42:37.243 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.2.0 (signed, key ID 4F80527A391BEFD2)
2025-06-19 09:42:37.244125 | orchestrator | 09:42:37.243 STDOUT terraform: Providers are signed by their developers.
2025-06-19 09:42:37.244134 | orchestrator | 09:42:37.244 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-06-19 09:42:37.244140 | orchestrator | 09:42:37.244 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-06-19 09:42:37.244146 | orchestrator | 09:42:37.244 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-06-19 09:42:37.244308 | orchestrator | 09:42:37.244 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-06-19 09:42:37.244318 | orchestrator | 09:42:37.244 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-06-19 09:42:37.244323 | orchestrator | 09:42:37.244 STDOUT terraform: you run "tofu init" in the future.
2025-06-19 09:42:37.244458 | orchestrator | 09:42:37.244 STDOUT terraform: OpenTofu has been successfully initialized!
2025-06-19 09:42:37.244509 | orchestrator | 09:42:37.244 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-06-19 09:42:37.244542 | orchestrator | 09:42:37.244 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-06-19 09:42:37.244548 | orchestrator | 09:42:37.244 STDOUT terraform: should now work.
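The provider-resolution lines above ("Finding latest version of hashicorp/null...", "versions matching \">= 1.53.0\"") come from a `required_providers` block in the testbed's OpenTofu configuration. A minimal sketch of what such a block would look like, reconstructed only from the constraints visible in this log (the actual testbed source may pin additional or different versions, and the `local` constraint is truncated in this excerpt):

```
terraform {
  required_providers {
    # Resolved to v2.5.3 during init; the exact constraint is cut off above
    local = {
      source = "hashicorp/local"
    }
    # "Finding latest version" implies no version constraint
    null = {
      source = "hashicorp/null"
    }
    # Matches the ">= 1.53.0" constraint; init selected v3.2.0
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0"
    }
  }
}
```

As the init output notes, the resulting `.terraform.lock.hcl` records the selected versions (and their signing keys), so committing it makes later `tofu init` runs reproducible.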
2025-06-19 09:42:37.244597 | orchestrator | 09:42:37.244 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-06-19 09:42:37.244653 | orchestrator | 09:42:37.244 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-06-19 09:42:37.244724 | orchestrator | 09:42:37.244 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-06-19 09:42:37.575266 | orchestrator | 09:42:37.575 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead.
2025-06-19 09:42:37.575346 | orchestrator | 09:42:37.575 WARN  The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead.
2025-06-19 09:42:39.151245 | orchestrator | 09:42:39.151 STDOUT terraform: Created and switched to workspace "ci"!
2025-06-19 09:42:39.151324 | orchestrator | 09:42:39.151 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-06-19 09:42:39.151443 | orchestrator | 09:42:39.151 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-06-19 09:42:39.151488 | orchestrator | 09:42:39.151 STDOUT terraform: for this configuration.
2025-06-19 09:42:39.358067 | orchestrator | 09:42:39.356 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead.
2025-06-19 09:42:39.358133 | orchestrator | 09:42:39.356 WARN  The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead.
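The Terragrunt warnings above name their own replacements. A sketch of the non-deprecated invocations they suggest (the exact arguments the job passes to `workspace` are not shown in this excerpt; `new ci` is inferred from the "Created and switched to workspace \"ci\"" line):

```
# Replaces the deprecated TERRAGRUNT_TFPATH variable (path taken from the warning itself)
export TG_TF_PATH=/home/zuul-testbed06/terraform

# Replaces the deprecated bare subcommands flagged in the log
terragrunt run -- workspace new ci
terragrunt run -- fmt
```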
2025-06-19 09:42:39.444120 | orchestrator | 09:42:39.443 STDOUT terraform: ci.auto.tfvars
2025-06-19 09:42:39.787728 | orchestrator | 09:42:39.787 STDOUT terraform: default_custom.tf
2025-06-19 09:42:40.149099 | orchestrator | 09:42:40.148 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead.
2025-06-19 09:42:41.145705 | orchestrator | 09:42:41.145 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-06-19 09:42:41.672302 | orchestrator | 09:42:41.670 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-06-19 09:42:41.862227 | orchestrator | 09:42:41.862 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-06-19 09:42:41.862293 | orchestrator | 09:42:41.862 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-06-19 09:42:41.862300 | orchestrator | 09:42:41.862 STDOUT terraform:  + create
2025-06-19 09:42:41.862305 | orchestrator | 09:42:41.862 STDOUT terraform:  <= read (data resources)
2025-06-19 09:42:41.862311 | orchestrator | 09:42:41.862 STDOUT terraform: OpenTofu will perform the following actions:
2025-06-19 09:42:41.862336 | orchestrator | 09:42:41.862 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply
2025-06-19 09:42:41.862381 | orchestrator | 09:42:41.862 STDOUT terraform:  # (config refers to values not yet known)
2025-06-19 09:42:41.862417 | orchestrator | 09:42:41.862 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-06-19 09:42:41.862464 | orchestrator | 09:42:41.862 STDOUT terraform:  + checksum = (known after apply)
2025-06-19 09:42:41.862502 | orchestrator | 09:42:41.862 STDOUT terraform:  + created_at = (known after apply)
2025-06-19 09:42:41.862543 | orchestrator | 09:42:41.862 STDOUT terraform:  + file = (known after apply)
2025-06-19 09:42:41.862584 | orchestrator | 09:42:41.862 STDOUT terraform:  + id = (known after apply)
2025-06-19 09:42:41.862625 | orchestrator | 09:42:41.862 STDOUT terraform:  + metadata = (known after apply)
2025-06-19 09:42:41.862673 | orchestrator | 09:42:41.862 STDOUT terraform:  + min_disk_gb = (known after apply)
2025-06-19 09:42:41.862716 | orchestrator | 09:42:41.862 STDOUT terraform:  + min_ram_mb = (known after apply)
2025-06-19 09:42:41.862744 | orchestrator | 09:42:41.862 STDOUT terraform:  + most_recent = true
2025-06-19 09:42:41.862785 | orchestrator | 09:42:41.862 STDOUT terraform:  + name = (known after apply)
2025-06-19 09:42:41.862845 | orchestrator | 09:42:41.862 STDOUT terraform:  + protected = (known after apply)
2025-06-19 09:42:41.862888 | orchestrator | 09:42:41.862 STDOUT terraform:  + region = (known after apply)
2025-06-19 09:42:41.862929 | orchestrator | 09:42:41.862 STDOUT terraform:  + schema = (known after apply)
2025-06-19 09:42:41.862970 | orchestrator | 09:42:41.862 STDOUT terraform:  + size_bytes = (known after apply)
2025-06-19 09:42:41.863008 | orchestrator | 09:42:41.862 STDOUT terraform:  + tags = (known after apply)
2025-06-19 09:42:41.863049 | orchestrator | 09:42:41.862 STDOUT terraform:  + updated_at = (known after apply)
2025-06-19 09:42:41.863055 | orchestrator | 09:42:41.863 STDOUT terraform:  }
2025-06-19 09:42:41.863170 | orchestrator | 09:42:41.863 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply
2025-06-19 09:42:41.863201 | orchestrator | 09:42:41.863 STDOUT terraform:  # (config refers to values not yet known)
2025-06-19 09:42:41.863250 | orchestrator | 09:42:41.863 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-06-19 09:42:41.863289 | orchestrator | 09:42:41.863 STDOUT terraform:  + checksum = (known after apply)
2025-06-19 09:42:41.863328 | orchestrator | 09:42:41.863 STDOUT terraform:  + created_at = (known after apply)
2025-06-19 09:42:41.863367 | orchestrator | 09:42:41.863 STDOUT terraform:  + file = (known after apply)
2025-06-19 09:42:41.863411 | orchestrator | 09:42:41.863 STDOUT terraform:  + id = (known after apply)
2025-06-19 09:42:41.863451 | orchestrator | 09:42:41.863 STDOUT terraform:  + metadata = (known after apply)
2025-06-19 09:42:41.863490 | orchestrator | 09:42:41.863 STDOUT terraform:  + min_disk_gb = (known after apply)
2025-06-19 09:42:41.863567 | orchestrator | 09:42:41.863 STDOUT terraform:  + min_ram_mb = (known after apply)
2025-06-19 09:42:41.863580 | orchestrator | 09:42:41.863 STDOUT terraform:  + most_recent = true
2025-06-19 09:42:41.863586 | orchestrator | 09:42:41.863 STDOUT terraform:  + name = (known after apply)
2025-06-19 09:42:41.863623 | orchestrator | 09:42:41.863 STDOUT terraform:  + protected = (known after apply)
2025-06-19 09:42:41.863662 | orchestrator | 09:42:41.863 STDOUT terraform:  + region = (known after apply)
2025-06-19 09:42:41.863700 | orchestrator | 09:42:41.863 STDOUT terraform:  + schema = (known after apply)
2025-06-19 09:42:41.863739 | orchestrator | 09:42:41.863 STDOUT terraform:  + size_bytes = (known after apply)
2025-06-19 09:42:41.863778 | orchestrator | 09:42:41.863 STDOUT terraform:  + tags = (known after apply)
2025-06-19 09:42:41.863830 | orchestrator | 09:42:41.863 STDOUT terraform:  + updated_at = (known after apply)
2025-06-19 09:42:41.863837 | orchestrator | 09:42:41.863 STDOUT terraform:  }
2025-06-19 09:42:41.863887 | orchestrator | 09:42:41.863 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created
2025-06-19 09:42:41.863928 | orchestrator | 09:42:41.863 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" {
2025-06-19 09:42:41.863976 | orchestrator | 09:42:41.863 STDOUT terraform:  + content = (known after apply)
2025-06-19 09:42:41.864024 | orchestrator | 09:42:41.863 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-06-19 09:42:41.864075 | orchestrator | 09:42:41.864 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-06-19 09:42:41.864123 | orchestrator | 09:42:41.864 STDOUT terraform:  + content_md5 = (known after apply)
2025-06-19 09:42:41.864170 | orchestrator | 09:42:41.864 STDOUT terraform:  + content_sha1 = (known after apply)
2025-06-19 09:42:41.864218 | orchestrator | 09:42:41.864 STDOUT terraform:  + content_sha256 = (known after apply)
2025-06-19 09:42:41.864266 | orchestrator | 09:42:41.864 STDOUT terraform:  + content_sha512 = (known after apply)
2025-06-19 09:42:41.864315 | orchestrator | 09:42:41.864 STDOUT terraform:  + directory_permission = "0777"
2025-06-19 09:42:41.864369 | orchestrator | 09:42:41.864 STDOUT terraform:  + file_permission = "0644"
2025-06-19 09:42:41.864460 | orchestrator | 09:42:41.864 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci"
2025-06-19 09:42:41.864547 | orchestrator | 09:42:41.864 STDOUT terraform:  + id = (known after apply)
2025-06-19 09:42:41.864580 | orchestrator | 09:42:41.864 STDOUT terraform:  }
2025-06-19 09:42:41.864644 | orchestrator | 09:42:41.864 STDOUT terraform:  # local_file.id_rsa_pub will be created
2025-06-19 09:42:41.864721 | orchestrator | 09:42:41.864 STDOUT terraform:  + resource "local_file" "id_rsa_pub" {
2025-06-19 09:42:41.864865 | orchestrator | 09:42:41.864 STDOUT terraform:  + content = (known after apply)
2025-06-19 09:42:41.864928 | orchestrator | 09:42:41.864 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-06-19 09:42:41.865018 | orchestrator | 09:42:41.864 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-06-19 09:42:41.865106 | orchestrator | 09:42:41.865 STDOUT terraform:  + content_md5 = (known after apply)
2025-06-19 09:42:41.865200 | orchestrator | 09:42:41.865 STDOUT terraform:  + content_sha1 = (known after apply)
2025-06-19 09:42:41.865285 | orchestrator | 09:42:41.865 STDOUT terraform:  + content_sha256 = (known after apply)
2025-06-19 09:42:41.865363 | orchestrator | 09:42:41.865 STDOUT terraform:  + content_sha512 = (known after apply)
2025-06-19 09:42:41.865403 | orchestrator | 09:42:41.865 STDOUT terraform:  + directory_permission = "0777"
2025-06-19 09:42:41.865437 | orchestrator | 09:42:41.865 STDOUT terraform:  + file_permission = "0644"
2025-06-19 09:42:41.865481 | orchestrator | 09:42:41.865 STDOUT terraform:  + filename = ".id_rsa.ci.pub"
2025-06-19 09:42:41.865534 | orchestrator | 09:42:41.865 STDOUT terraform:  + id = (known after apply)
2025-06-19 09:42:41.865540 | orchestrator | 09:42:41.865 STDOUT terraform:  }
2025-06-19 09:42:41.865584 | orchestrator | 09:42:41.865 STDOUT terraform:  # local_file.inventory will be created
2025-06-19 09:42:41.865657 | orchestrator | 09:42:41.865 STDOUT terraform:  + resource "local_file" "inventory" {
2025-06-19 09:42:41.865709 | orchestrator | 09:42:41.865 STDOUT terraform:  + content = (known after apply)
2025-06-19 09:42:41.865758 | orchestrator | 09:42:41.865 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-06-19 09:42:41.865846 | orchestrator | 09:42:41.865 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-06-19 09:42:41.865905 | orchestrator | 09:42:41.865 STDOUT terraform:  + content_md5 = (known after apply)
2025-06-19 09:42:41.865954 | orchestrator | 09:42:41.865 STDOUT terraform:  + content_sha1 = (known after apply)
2025-06-19 09:42:41.866003 | orchestrator | 09:42:41.865 STDOUT terraform:  + content_sha256 = (known after apply)
2025-06-19 09:42:41.866079 | orchestrator | 09:42:41.865 STDOUT terraform:  + content_sha512 = (known after apply)
2025-06-19 09:42:41.866114 | orchestrator | 09:42:41.866 STDOUT terraform:  + directory_permission = "0777"
2025-06-19 09:42:41.866148 | orchestrator | 09:42:41.866 STDOUT terraform:  + file_permission = "0644"
2025-06-19 09:42:41.866189 | orchestrator | 09:42:41.866 STDOUT terraform:  + filename = "inventory.ci"
2025-06-19 09:42:41.866240 | orchestrator | 09:42:41.866 STDOUT terraform:  + id = (known after apply)
2025-06-19 09:42:41.866246 | orchestrator | 09:42:41.866 STDOUT terraform:  }
2025-06-19 09:42:41.866338 | orchestrator | 09:42:41.866 STDOUT terraform:  # local_sensitive_file.id_rsa will be created
2025-06-19 09:42:41.866380 | orchestrator | 09:42:41.866 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" {
2025-06-19 09:42:41.866421 | orchestrator | 09:42:41.866 STDOUT terraform:  + content = (sensitive value)
2025-06-19 09:42:41.866466 | orchestrator | 09:42:41.866 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-06-19 09:42:41.866518 | orchestrator | 09:42:41.866 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-06-19 09:42:41.866565 | orchestrator | 09:42:41.866 STDOUT terraform:  + content_md5 = (known after apply)
2025-06-19 09:42:41.866611 | orchestrator | 09:42:41.866 STDOUT terraform:  + content_sha1 = (known after apply)
2025-06-19 09:42:41.866656 | orchestrator | 09:42:41.866 STDOUT terraform:  + content_sha256 = (known after apply)
2025-06-19 09:42:41.866707 | orchestrator | 09:42:41.866 STDOUT terraform:  + content_sha512 = (known after apply)
2025-06-19 09:42:41.866740 | orchestrator | 09:42:41.866 STDOUT terraform:  + directory_permission = "0700"
2025-06-19 09:42:41.866772 | orchestrator | 09:42:41.866 STDOUT terraform:  + file_permission = "0600"
2025-06-19 09:42:41.866822 | orchestrator | 09:42:41.866 STDOUT terraform:  + filename = ".id_rsa.ci"
2025-06-19 09:42:41.866870 | orchestrator | 09:42:41.866 STDOUT terraform:  + id = (known after apply)
2025-06-19 09:42:41.866876 | orchestrator | 09:42:41.866 STDOUT terraform:  }
2025-06-19 09:42:41.866919 | orchestrator | 09:42:41.866 STDOUT terraform:  # null_resource.node_semaphore will be created
2025-06-19 09:42:41.866959 | orchestrator | 09:42:41.866 STDOUT terraform:  + resource "null_resource" "node_semaphore" {
2025-06-19 09:42:41.866988 | orchestrator | 09:42:41.866 STDOUT terraform:  + id = (known after apply)
2025-06-19 09:42:41.866994 | orchestrator | 09:42:41.866 STDOUT terraform:  }
2025-06-19 09:42:41.867066 | orchestrator | 09:42:41.866 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-06-19 09:42:41.867128 | orchestrator | 09:42:41.867 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-06-19 09:42:41.867172 | orchestrator | 09:42:41.867 STDOUT terraform:  + attachment = (known after apply)
2025-06-19 09:42:41.867203 | orchestrator | 09:42:41.867 STDOUT terraform:  + availability_zone = "nova"
2025-06-19 09:42:41.867248 | orchestrator | 09:42:41.867 STDOUT terraform:  + id = (known after apply)
2025-06-19 09:42:41.867294 | orchestrator | 09:42:41.867 STDOUT terraform:  + image_id = (known after apply)
2025-06-19 09:42:41.867341 | orchestrator | 09:42:41.867 STDOUT terraform:  + metadata = (known after apply)
2025-06-19 09:42:41.867398 | orchestrator | 09:42:41.867 STDOUT terraform:  + name = "testbed-volume-manager-base"
2025-06-19 09:42:41.867453 | orchestrator | 09:42:41.867 STDOUT terraform:  + region = (known after apply)
2025-06-19 09:42:41.867480 | orchestrator | 09:42:41.867 STDOUT terraform:  + size = 80
2025-06-19 09:42:41.867511 | orchestrator | 09:42:41.867 STDOUT terraform:  + volume_retype_policy = "never"
2025-06-19 09:42:41.867542 | orchestrator | 09:42:41.867 STDOUT terraform:  + volume_type = "ssd"
2025-06-19 09:42:41.867548 | orchestrator | 09:42:41.867 STDOUT terraform:  }
2025-06-19 09:42:41.867613 | orchestrator | 09:42:41.867 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-06-19 09:42:41.867671 | orchestrator | 09:42:41.867 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-19 09:42:41.867716 | orchestrator | 09:42:41.867 STDOUT terraform:  + attachment = (known after apply)
2025-06-19 09:42:41.867747 | orchestrator | 09:42:41.867 STDOUT terraform:  + availability_zone = "nova"
2025-06-19 09:42:41.867794 | orchestrator | 09:42:41.867 STDOUT terraform:  + id = (known after apply)
2025-06-19 09:42:41.867851 | orchestrator | 09:42:41.867 STDOUT terraform:  + image_id = (known after apply)
2025-06-19 09:42:41.867897 | orchestrator | 09:42:41.867 STDOUT terraform:  + metadata = (known after apply)
2025-06-19 09:42:41.867956 | orchestrator | 09:42:41.867 STDOUT terraform:  + name = "testbed-volume-0-node-base"
2025-06-19 09:42:41.868001 | orchestrator | 09:42:41.867 STDOUT terraform:  + region = (known after apply)
2025-06-19 09:42:41.868027 | orchestrator | 09:42:41.867 STDOUT terraform:  + size = 80
2025-06-19 09:42:41.868060 | orchestrator | 09:42:41.868 STDOUT terraform:  + volume_retype_policy = "never"
2025-06-19 09:42:41.868110 | orchestrator | 09:42:41.868 STDOUT terraform:  + volume_type = "ssd"
2025-06-19 09:42:41.868138 | orchestrator | 09:42:41.868 STDOUT terraform:  }
2025-06-19 09:42:41.868210 | orchestrator | 09:42:41.868 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-06-19 09:42:41.868273 | orchestrator | 09:42:41.868 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-19 09:42:41.868319 | orchestrator | 09:42:41.868 STDOUT terraform:  + attachment = (known after apply)
2025-06-19 09:42:41.868350 | orchestrator | 09:42:41.868 STDOUT terraform:  + availability_zone = "nova"
2025-06-19 09:42:41.868398 | orchestrator | 09:42:41.868 STDOUT terraform:  + id = (known after apply)
2025-06-19 09:42:41.868445 | orchestrator | 09:42:41.868 STDOUT terraform:  + image_id = (known after apply)
2025-06-19 09:42:41.868491 | orchestrator | 09:42:41.868 STDOUT terraform:  + metadata = (known after apply)
2025-06-19 09:42:41.868550 | orchestrator | 09:42:41.868 STDOUT terraform:  + name = "testbed-volume-1-node-base"
2025-06-19 09:42:41.868597 | orchestrator | 09:42:41.868 STDOUT terraform:  + region = (known after apply)
2025-06-19 09:42:41.868624 | orchestrator | 09:42:41.868 STDOUT terraform:  + size = 80
2025-06-19 09:42:41.868657 | orchestrator | 09:42:41.868 STDOUT terraform:  + volume_retype_policy = "never"
2025-06-19 09:42:41.868688 | orchestrator | 09:42:41.868 STDOUT terraform:  + volume_type = "ssd"
2025-06-19 09:42:41.868694 | orchestrator | 09:42:41.868 STDOUT terraform:  }
2025-06-19 09:42:41.868759 | orchestrator | 09:42:41.868 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-06-19 09:42:41.868827 | orchestrator | 09:42:41.868 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-19 09:42:41.868873 | orchestrator | 09:42:41.868 STDOUT terraform:  + attachment = (known after apply)
2025-06-19 09:42:41.868907 | orchestrator | 09:42:41.868 STDOUT terraform:  + availability_zone = "nova"
2025-06-19 09:42:41.868949 | orchestrator | 09:42:41.868 STDOUT terraform:  + id = (known after apply)
2025-06-19 09:42:41.868996 | orchestrator | 09:42:41.868 STDOUT terraform:  + image_id = (known after apply)
2025-06-19 09:42:41.869041 | orchestrator | 09:42:41.868 STDOUT terraform:  + metadata = (known after apply)
2025-06-19 09:42:41.869098 | orchestrator | 09:42:41.869 STDOUT terraform:  + name = "testbed-volume-2-node-base"
2025-06-19 09:42:41.869143 | orchestrator | 09:42:41.869 STDOUT terraform:  + region = (known after apply)
2025-06-19 09:42:41.869170 | orchestrator | 09:42:41.869 STDOUT terraform:  + size = 80
2025-06-19 09:42:41.869201 | orchestrator | 09:42:41.869 STDOUT terraform:  + volume_retype_policy = "never"
2025-06-19 09:42:41.869232 | orchestrator | 09:42:41.869 STDOUT terraform:  + volume_type = "ssd"
2025-06-19 09:42:41.869238 | orchestrator | 09:42:41.869 STDOUT terraform:  }
2025-06-19 09:42:41.869305 | orchestrator | 09:42:41.869 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-06-19 09:42:41.869364 | orchestrator | 09:42:41.869 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-19 09:42:41.869410 | orchestrator | 09:42:41.869 STDOUT terraform:  + attachment = (known after apply)
2025-06-19 09:42:41.869441 | orchestrator | 09:42:41.869 STDOUT terraform:  + availability_zone = "nova"
2025-06-19 09:42:41.869487 | orchestrator | 09:42:41.869 STDOUT terraform:  + id = (known after apply)
2025-06-19 09:42:41.869532 | orchestrator | 09:42:41.869 STDOUT terraform:  + image_id = (known after apply)
2025-06-19 09:42:41.869577 | orchestrator | 09:42:41.869 STDOUT terraform:  + metadata = (known after apply)
2025-06-19 09:42:41.869635 | orchestrator | 09:42:41.869 STDOUT terraform:  + name = "testbed-volume-3-node-base"
2025-06-19 09:42:41.869681 | orchestrator | 09:42:41.869 STDOUT terraform:  + region = (known after apply)
2025-06-19 09:42:41.869708 | orchestrator | 09:42:41.869 STDOUT terraform:  + size = 80
2025-06-19 09:42:41.869738 | orchestrator | 09:42:41.869 STDOUT terraform:  + volume_retype_policy = "never"
2025-06-19 09:42:41.869769 | orchestrator | 09:42:41.869 STDOUT terraform:  + volume_type = "ssd"
2025-06-19 09:42:41.869776 | orchestrator | 09:42:41.869 STDOUT terraform:  }
2025-06-19 09:42:41.869872 | orchestrator | 09:42:41.869 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-06-19 09:42:41.869937 | orchestrator | 09:42:41.869 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-19 09:42:41.869979 | orchestrator | 09:42:41.869 STDOUT terraform:  + attachment = (known after apply)
2025-06-19 09:42:41.870025 | orchestrator | 09:42:41.869 STDOUT terraform:  + availability_zone = "nova"
2025-06-19 09:42:41.870082 | orchestrator | 09:42:41.870 STDOUT terraform:  + id = (known after apply)
2025-06-19 09:42:41.870132 | orchestrator | 09:42:41.870 STDOUT terraform:  + image_id = (known after apply)
2025-06-19 09:42:41.870175 | orchestrator | 09:42:41.870 STDOUT
terraform:  + metadata = (known after apply) 2025-06-19 09:42:41.870232 | orchestrator | 09:42:41.870 STDOUT terraform:  + name = "testbed-volume-4-node-base" 2025-06-19 09:42:41.870276 | orchestrator | 09:42:41.870 STDOUT terraform:  + region = (known after apply) 2025-06-19 09:42:41.870296 | orchestrator | 09:42:41.870 STDOUT terraform:  + size = 80 2025-06-19 09:42:41.870326 | orchestrator | 09:42:41.870 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-19 09:42:41.870355 | orchestrator | 09:42:41.870 STDOUT terraform:  + volume_type = "ssd" 2025-06-19 09:42:41.870361 | orchestrator | 09:42:41.870 STDOUT terraform:  } 2025-06-19 09:42:41.870423 | orchestrator | 09:42:41.870 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created 2025-06-19 09:42:41.870477 | orchestrator | 09:42:41.870 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-06-19 09:42:41.870520 | orchestrator | 09:42:41.870 STDOUT terraform:  + attachment = (known after apply) 2025-06-19 09:42:41.870553 | orchestrator | 09:42:41.870 STDOUT terraform:  + availability_zone = "nova" 2025-06-19 09:42:41.870598 | orchestrator | 09:42:41.870 STDOUT terraform:  + id = (known after apply) 2025-06-19 09:42:41.870642 | orchestrator | 09:42:41.870 STDOUT terraform:  + image_id = (known after apply) 2025-06-19 09:42:41.870683 | orchestrator | 09:42:41.870 STDOUT terraform:  + metadata = (known after apply) 2025-06-19 09:42:41.870737 | orchestrator | 09:42:41.870 STDOUT terraform:  + name = "testbed-volume-5-node-base" 2025-06-19 09:42:41.870781 | orchestrator | 09:42:41.870 STDOUT terraform:  + region = (known after apply) 2025-06-19 09:42:41.870829 | orchestrator | 09:42:41.870 STDOUT terraform:  + size = 80 2025-06-19 09:42:41.870840 | orchestrator | 09:42:41.870 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-19 09:42:41.870868 | orchestrator | 09:42:41.870 STDOUT terraform:  + volume_type = "ssd" 2025-06-19 
09:42:41.870875 | orchestrator | 09:42:41.870 STDOUT terraform:  } 2025-06-19 09:42:41.870936 | orchestrator | 09:42:41.870 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created 2025-06-19 09:42:41.870989 | orchestrator | 09:42:41.870 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-19 09:42:41.871036 | orchestrator | 09:42:41.870 STDOUT terraform:  + attachment = (known after apply) 2025-06-19 09:42:41.871057 | orchestrator | 09:42:41.871 STDOUT terraform:  + availability_zone = "nova" 2025-06-19 09:42:41.871101 | orchestrator | 09:42:41.871 STDOUT terraform:  + id = (known after apply) 2025-06-19 09:42:41.871143 | orchestrator | 09:42:41.871 STDOUT terraform:  + metadata = (known after apply) 2025-06-19 09:42:41.871189 | orchestrator | 09:42:41.871 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-06-19 09:42:41.871232 | orchestrator | 09:42:41.871 STDOUT terraform:  + region = (known after apply) 2025-06-19 09:42:41.871258 | orchestrator | 09:42:41.871 STDOUT terraform:  + size = 20 2025-06-19 09:42:41.871286 | orchestrator | 09:42:41.871 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-19 09:42:41.871317 | orchestrator | 09:42:41.871 STDOUT terraform:  + volume_type = "ssd" 2025-06-19 09:42:41.871323 | orchestrator | 09:42:41.871 STDOUT terraform:  } 2025-06-19 09:42:41.871382 | orchestrator | 09:42:41.871 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-06-19 09:42:41.871434 | orchestrator | 09:42:41.871 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-19 09:42:41.871476 | orchestrator | 09:42:41.871 STDOUT terraform:  + attachment = (known after apply) 2025-06-19 09:42:41.871505 | orchestrator | 09:42:41.871 STDOUT terraform:  + availability_zone = "nova" 2025-06-19 09:42:41.871550 | orchestrator | 09:42:41.871 STDOUT terraform:  + id = (known after apply) 2025-06-19 09:42:41.871593 | 
orchestrator | 09:42:41.871 STDOUT terraform:  + metadata = (known after apply) 2025-06-19 09:42:41.871640 | orchestrator | 09:42:41.871 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-06-19 09:42:41.871683 | orchestrator | 09:42:41.871 STDOUT terraform:  + region = (known after apply) 2025-06-19 09:42:41.871709 | orchestrator | 09:42:41.871 STDOUT terraform:  + size = 20 2025-06-19 09:42:41.871741 | orchestrator | 09:42:41.871 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-19 09:42:41.871770 | orchestrator | 09:42:41.871 STDOUT terraform:  + volume_type = "ssd" 2025-06-19 09:42:41.871776 | orchestrator | 09:42:41.871 STDOUT terraform:  } 2025-06-19 09:42:41.871845 | orchestrator | 09:42:41.871 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-06-19 09:42:41.871897 | orchestrator | 09:42:41.871 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-19 09:42:41.871939 | orchestrator | 09:42:41.871 STDOUT terraform:  + attachment = (known after apply) 2025-06-19 09:42:41.871967 | orchestrator | 09:42:41.871 STDOUT terraform:  + availability_zone = "nova" 2025-06-19 09:42:41.872008 | orchestrator | 09:42:41.871 STDOUT terraform:  + id = (known after apply) 2025-06-19 09:42:41.872052 | orchestrator | 09:42:41.871 STDOUT terraform:  + metadata = (known after apply) 2025-06-19 09:42:41.872099 | orchestrator | 09:42:41.872 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-06-19 09:42:41.872141 | orchestrator | 09:42:41.872 STDOUT terraform:  + region = (known after apply) 2025-06-19 09:42:41.872167 | orchestrator | 09:42:41.872 STDOUT terraform:  + size = 20 2025-06-19 09:42:41.872199 | orchestrator | 09:42:41.872 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-19 09:42:41.872225 | orchestrator | 09:42:41.872 STDOUT terraform:  + volume_type = "ssd" 2025-06-19 09:42:41.872232 | orchestrator | 09:42:41.872 STDOUT terraform:  } 2025-06-19 09:42:41.872289 | 
orchestrator | 09:42:41.872 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-06-19 09:42:41.872341 | orchestrator | 09:42:41.872 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-19 09:42:41.872384 | orchestrator | 09:42:41.872 STDOUT terraform:  + attachment = (known after apply) 2025-06-19 09:42:41.872412 | orchestrator | 09:42:41.872 STDOUT terraform:  + availability_zone = "nova" 2025-06-19 09:42:41.872456 | orchestrator | 09:42:41.872 STDOUT terraform:  + id = (known after apply) 2025-06-19 09:42:41.872499 | orchestrator | 09:42:41.872 STDOUT terraform:  + metadata = (known after apply) 2025-06-19 09:42:41.872546 | orchestrator | 09:42:41.872 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-06-19 09:42:41.872591 | orchestrator | 09:42:41.872 STDOUT terraform:  + region = (known after apply) 2025-06-19 09:42:41.872617 | orchestrator | 09:42:41.872 STDOUT terraform:  + size = 20 2025-06-19 09:42:41.872646 | orchestrator | 09:42:41.872 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-19 09:42:41.872688 | orchestrator | 09:42:41.872 STDOUT terraform:  + volume_type = "ssd" 2025-06-19 09:42:41.872708 | orchestrator | 09:42:41.872 STDOUT terraform:  } 2025-06-19 09:42:41.872795 | orchestrator | 09:42:41.872 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-06-19 09:42:41.872878 | orchestrator | 09:42:41.872 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-19 09:42:41.872920 | orchestrator | 09:42:41.872 STDOUT terraform:  + attachment = (known after apply) 2025-06-19 09:42:41.872951 | orchestrator | 09:42:41.872 STDOUT terraform:  + availability_zone = "nova" 2025-06-19 09:42:41.872996 | orchestrator | 09:42:41.872 STDOUT terraform:  + id = (known after apply) 2025-06-19 09:42:41.873039 | orchestrator | 09:42:41.872 STDOUT terraform:  + metadata = (known after apply) 2025-06-19 
09:42:41.873086 | orchestrator | 09:42:41.873 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-06-19 09:42:41.873130 | orchestrator | 09:42:41.873 STDOUT terraform:  + region = (known after apply) 2025-06-19 09:42:41.873154 | orchestrator | 09:42:41.873 STDOUT terraform:  + size = 20 2025-06-19 09:42:41.873183 | orchestrator | 09:42:41.873 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-19 09:42:41.873211 | orchestrator | 09:42:41.873 STDOUT terraform:  + volume_type = "ssd" 2025-06-19 09:42:41.873218 | orchestrator | 09:42:41.873 STDOUT terraform:  } 2025-06-19 09:42:41.873287 | orchestrator | 09:42:41.873 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-06-19 09:42:41.873367 | orchestrator | 09:42:41.873 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-19 09:42:41.873432 | orchestrator | 09:42:41.873 STDOUT terraform:  + attachment = (known after apply) 2025-06-19 09:42:41.873479 | orchestrator | 09:42:41.873 STDOUT terraform:  + availability_zone = "nova" 2025-06-19 09:42:41.873548 | orchestrator | 09:42:41.873 STDOUT terraform:  + id = (known after apply) 2025-06-19 09:42:41.873616 | orchestrator | 09:42:41.873 STDOUT terraform:  + metadata = (known after apply) 2025-06-19 09:42:41.873695 | orchestrator | 09:42:41.873 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-06-19 09:42:41.873772 | orchestrator | 09:42:41.873 STDOUT terraform:  + region = (known after apply) 2025-06-19 09:42:41.873832 | orchestrator | 09:42:41.873 STDOUT terraform:  + size = 20 2025-06-19 09:42:41.873877 | orchestrator | 09:42:41.873 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-19 09:42:41.873927 | orchestrator | 09:42:41.873 STDOUT terraform:  + volume_type = "ssd" 2025-06-19 09:42:41.873953 | orchestrator | 09:42:41.873 STDOUT terraform:  } 2025-06-19 09:42:41.874073 | orchestrator | 09:42:41.873 STDOUT terraform:  # 
openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-06-19 09:42:41.874164 | orchestrator | 09:42:41.874 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-19 09:42:41.874241 | orchestrator | 09:42:41.874 STDOUT terraform:  + attachment = (known after apply) 2025-06-19 09:42:41.874282 | orchestrator | 09:42:41.874 STDOUT terraform:  + availability_zone = "nova" 2025-06-19 09:42:41.874326 | orchestrator | 09:42:41.874 STDOUT terraform:  + id = (known after apply) 2025-06-19 09:42:41.874368 | orchestrator | 09:42:41.874 STDOUT terraform:  + metadata = (known after apply) 2025-06-19 09:42:41.874418 | orchestrator | 09:42:41.874 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-06-19 09:42:41.874484 | orchestrator | 09:42:41.874 STDOUT terraform:  + region = (known after apply) 2025-06-19 09:42:41.874528 | orchestrator | 09:42:41.874 STDOUT terraform:  + size = 20 2025-06-19 09:42:41.874569 | orchestrator | 09:42:41.874 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-19 09:42:41.874600 | orchestrator | 09:42:41.874 STDOUT terraform:  + volume_type = "ssd" 2025-06-19 09:42:41.874607 | orchestrator | 09:42:41.874 STDOUT terraform:  } 2025-06-19 09:42:41.874665 | orchestrator | 09:42:41.874 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-06-19 09:42:41.874713 | orchestrator | 09:42:41.874 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-19 09:42:41.874753 | orchestrator | 09:42:41.874 STDOUT terraform:  + attachment = (known after apply) 2025-06-19 09:42:41.874780 | orchestrator | 09:42:41.874 STDOUT terraform:  + availability_zone = "nova" 2025-06-19 09:42:41.874862 | orchestrator | 09:42:41.874 STDOUT terraform:  + id = (known after apply) 2025-06-19 09:42:41.874895 | orchestrator | 09:42:41.874 STDOUT terraform:  + metadata = (known after apply) 2025-06-19 09:42:41.874939 | orchestrator | 09:42:41.874 STDOUT 
terraform:  + name = "testbed-volume-7-node-4" 2025-06-19 09:42:41.874978 | orchestrator | 09:42:41.874 STDOUT terraform:  + region = (known after apply) 2025-06-19 09:42:41.874998 | orchestrator | 09:42:41.874 STDOUT terraform:  + size = 20 2025-06-19 09:42:41.875025 | orchestrator | 09:42:41.874 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-19 09:42:41.875053 | orchestrator | 09:42:41.875 STDOUT terraform:  + volume_type = "ssd" 2025-06-19 09:42:41.875060 | orchestrator | 09:42:41.875 STDOUT terraform:  } 2025-06-19 09:42:41.875111 | orchestrator | 09:42:41.875 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-06-19 09:42:41.875159 | orchestrator | 09:42:41.875 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-19 09:42:41.875194 | orchestrator | 09:42:41.875 STDOUT terraform:  + attachment = (known after apply) 2025-06-19 09:42:41.875214 | orchestrator | 09:42:41.875 STDOUT terraform:  + availability_zone = "nova" 2025-06-19 09:42:41.879255 | orchestrator | 09:42:41.875 STDOUT terraform:  + id = (known after apply) 2025-06-19 09:42:41.879296 | orchestrator | 09:42:41.879 STDOUT terraform:  + metadata = (known after apply) 2025-06-19 09:42:41.879327 | orchestrator | 09:42:41.879 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-06-19 09:42:41.879367 | orchestrator | 09:42:41.879 STDOUT terraform:  + region = (known after apply) 2025-06-19 09:42:41.879389 | orchestrator | 09:42:41.879 STDOUT terraform:  + size = 20 2025-06-19 09:42:41.879421 | orchestrator | 09:42:41.879 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-19 09:42:41.879441 | orchestrator | 09:42:41.879 STDOUT terraform:  + volume_type = "ssd" 2025-06-19 09:42:41.879458 | orchestrator | 09:42:41.879 STDOUT terraform:  } 2025-06-19 09:42:41.879512 | orchestrator | 09:42:41.879 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-06-19 09:42:41.879546 | 
orchestrator | 09:42:41.879 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-06-19 09:42:41.879580 | orchestrator | 09:42:41.879 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-19 09:42:41.879620 | orchestrator | 09:42:41.879 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-19 09:42:41.879656 | orchestrator | 09:42:41.879 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-19 09:42:41.879703 | orchestrator | 09:42:41.879 STDOUT terraform:  + all_tags = (known after apply) 2025-06-19 09:42:41.879750 | orchestrator | 09:42:41.879 STDOUT terraform:  + availability_zone = "nova" 2025-06-19 09:42:41.879767 | orchestrator | 09:42:41.879 STDOUT terraform:  + config_drive = true 2025-06-19 09:42:41.879839 | orchestrator | 09:42:41.879 STDOUT terraform:  + created = (known after apply) 2025-06-19 09:42:41.879847 | orchestrator | 09:42:41.879 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-19 09:42:41.879880 | orchestrator | 09:42:41.879 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-06-19 09:42:41.879901 | orchestrator | 09:42:41.879 STDOUT terraform:  + force_delete = false 2025-06-19 09:42:41.879934 | orchestrator | 09:42:41.879 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-19 09:42:41.879969 | orchestrator | 09:42:41.879 STDOUT terraform:  + id = (known after apply) 2025-06-19 09:42:41.880012 | orchestrator | 09:42:41.879 STDOUT terraform:  + image_id = (known after apply) 2025-06-19 09:42:41.880046 | orchestrator | 09:42:41.880 STDOUT terraform:  + image_name = (known after apply) 2025-06-19 09:42:41.880066 | orchestrator | 09:42:41.880 STDOUT terraform:  + key_pair = "testbed" 2025-06-19 09:42:41.880097 | orchestrator | 09:42:41.880 STDOUT terraform:  + name = "testbed-manager" 2025-06-19 09:42:41.880117 | orchestrator | 09:42:41.880 STDOUT terraform:  + power_state = "active" 2025-06-19 09:42:41.880152 | orchestrator | 09:42:41.880 STDOUT 
terraform:  + region = (known after apply) 2025-06-19 09:42:41.880186 | orchestrator | 09:42:41.880 STDOUT terraform:  + security_groups = (known after apply) 2025-06-19 09:42:41.880206 | orchestrator | 09:42:41.880 STDOUT terraform:  + stop_before_destroy = false 2025-06-19 09:42:41.880242 | orchestrator | 09:42:41.880 STDOUT terraform:  + updated = (known after apply) 2025-06-19 09:42:41.880284 | orchestrator | 09:42:41.880 STDOUT terraform:  + user_data = (known after apply) 2025-06-19 09:42:41.880312 | orchestrator | 09:42:41.880 STDOUT terraform:  + block_device { 2025-06-19 09:42:41.880351 | orchestrator | 09:42:41.880 STDOUT terraform:  + boot_index = 0 2025-06-19 09:42:41.880380 | orchestrator | 09:42:41.880 STDOUT terraform:  + delete_on_termination = false 2025-06-19 09:42:41.880410 | orchestrator | 09:42:41.880 STDOUT terraform:  + destination_type = "volume" 2025-06-19 09:42:41.880438 | orchestrator | 09:42:41.880 STDOUT terraform:  + multiattach = false 2025-06-19 09:42:41.880469 | orchestrator | 09:42:41.880 STDOUT terraform:  + source_type = "volume" 2025-06-19 09:42:41.880507 | orchestrator | 09:42:41.880 STDOUT terraform:  + uuid = (known after apply) 2025-06-19 09:42:41.880514 | orchestrator | 09:42:41.880 STDOUT terraform:  } 2025-06-19 09:42:41.880532 | orchestrator | 09:42:41.880 STDOUT terraform:  + network { 2025-06-19 09:42:41.880539 | orchestrator | 09:42:41.880 STDOUT terraform:  + access_network = false 2025-06-19 09:42:41.880578 | orchestrator | 09:42:41.880 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-19 09:42:41.880608 | orchestrator | 09:42:41.880 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-19 09:42:41.880640 | orchestrator | 09:42:41.880 STDOUT terraform:  + mac = (known after apply) 2025-06-19 09:42:41.880672 | orchestrator | 09:42:41.880 STDOUT terraform:  + name = (known after apply) 2025-06-19 09:42:41.880703 | orchestrator | 09:42:41.880 STDOUT terraform:  + port = (known after apply) 
2025-06-19 09:42:41.880733 | orchestrator | 09:42:41.880 STDOUT terraform:  + uuid = (known after apply) 2025-06-19 09:42:41.880739 | orchestrator | 09:42:41.880 STDOUT terraform:  } 2025-06-19 09:42:41.880745 | orchestrator | 09:42:41.880 STDOUT terraform:  } 2025-06-19 09:42:41.880806 | orchestrator | 09:42:41.880 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-06-19 09:42:41.880871 | orchestrator | 09:42:41.880 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-19 09:42:41.880907 | orchestrator | 09:42:41.880 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-19 09:42:41.880946 | orchestrator | 09:42:41.880 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-19 09:42:41.880976 | orchestrator | 09:42:41.880 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-19 09:42:41.881010 | orchestrator | 09:42:41.880 STDOUT terraform:  + all_tags = (known after apply) 2025-06-19 09:42:41.881036 | orchestrator | 09:42:41.881 STDOUT terraform:  + availability_zone = "nova" 2025-06-19 09:42:41.881056 | orchestrator | 09:42:41.881 STDOUT terraform:  + config_drive = true 2025-06-19 09:42:41.881093 | orchestrator | 09:42:41.881 STDOUT terraform:  + created = (known after apply) 2025-06-19 09:42:41.881127 | orchestrator | 09:42:41.881 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-19 09:42:41.881156 | orchestrator | 09:42:41.881 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-19 09:42:41.881176 | orchestrator | 09:42:41.881 STDOUT terraform:  + force_delete = false 2025-06-19 09:42:41.881208 | orchestrator | 09:42:41.881 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-19 09:42:41.881242 | orchestrator | 09:42:41.881 STDOUT terraform:  + id = (known after apply) 2025-06-19 09:42:41.881276 | orchestrator | 09:42:41.881 STDOUT terraform:  + image_id = (known after apply) 2025-06-19 09:42:41.881311 | orchestrator | 09:42:41.881 
STDOUT terraform:  + image_name = (known after apply) 2025-06-19 09:42:41.881338 | orchestrator | 09:42:41.881 STDOUT terraform:  + key_pair = "testbed" 2025-06-19 09:42:41.881368 | orchestrator | 09:42:41.881 STDOUT terraform:  + name = "testbed-node-0" 2025-06-19 09:42:41.881387 | orchestrator | 09:42:41.881 STDOUT terraform:  + power_state = "active" 2025-06-19 09:42:41.881421 | orchestrator | 09:42:41.881 STDOUT terraform:  + region = (known after apply) 2025-06-19 09:42:41.881455 | orchestrator | 09:42:41.881 STDOUT terraform:  + security_groups = (known after apply) 2025-06-19 09:42:41.881475 | orchestrator | 09:42:41.881 STDOUT terraform:  + stop_before_destroy = false 2025-06-19 09:42:41.881508 | orchestrator | 09:42:41.881 STDOUT terraform:  + updated = (known after apply) 2025-06-19 09:42:41.881559 | orchestrator | 09:42:41.881 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-06-19 09:42:41.881565 | orchestrator | 09:42:41.881 STDOUT terraform:  + block_device { 2025-06-19 09:42:41.881594 | orchestrator | 09:42:41.881 STDOUT terraform:  + boot_index = 0 2025-06-19 09:42:41.881622 | orchestrator | 09:42:41.881 STDOUT terraform:  + delete_on_termination = false 2025-06-19 09:42:41.881650 | orchestrator | 09:42:41.881 STDOUT terraform:  + destination_type = "volume" 2025-06-19 09:42:41.881678 | orchestrator | 09:42:41.881 STDOUT terraform:  + multiattach = false 2025-06-19 09:42:41.881707 | orchestrator | 09:42:41.881 STDOUT terraform:  + source_type = "volume" 2025-06-19 09:42:41.881743 | orchestrator | 09:42:41.881 STDOUT terraform:  + uuid = (known after apply) 2025-06-19 09:42:41.881749 | orchestrator | 09:42:41.881 STDOUT terraform:  } 2025-06-19 09:42:41.881755 | orchestrator | 09:42:41.881 STDOUT terraform:  + network { 2025-06-19 09:42:41.881783 | orchestrator | 09:42:41.881 STDOUT terraform:  + access_network = false 2025-06-19 09:42:41.881823 | orchestrator | 09:42:41.881 STDOUT terraform:  + fixed_ip_v4 = (known 
after apply) 2025-06-19 09:42:41.881854 | orchestrator | 09:42:41.881 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-19 09:42:41.881886 | orchestrator | 09:42:41.881 STDOUT terraform:  + mac = (known after apply) 2025-06-19 09:42:41.881916 | orchestrator | 09:42:41.881 STDOUT terraform:  + name = (known after apply) 2025-06-19 09:42:41.881946 | orchestrator | 09:42:41.881 STDOUT terraform:  + port = (known after apply) 2025-06-19 09:42:41.881976 | orchestrator | 09:42:41.881 STDOUT terraform:  + uuid = (known after apply) 2025-06-19 09:42:41.881982 | orchestrator | 09:42:41.881 STDOUT terraform:  } 2025-06-19 09:42:41.881987 | orchestrator | 09:42:41.881 STDOUT terraform:  } 2025-06-19 09:42:41.882116 | orchestrator | 09:42:41.882 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-06-19 09:42:41.882156 | orchestrator | 09:42:41.882 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-19 09:42:41.882191 | orchestrator | 09:42:41.882 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-19 09:42:41.882224 | orchestrator | 09:42:41.882 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-19 09:42:41.882258 | orchestrator | 09:42:41.882 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-19 09:42:41.882291 | orchestrator | 09:42:41.882 STDOUT terraform:  + all_tags = (known after apply) 2025-06-19 09:42:41.882311 | orchestrator | 09:42:41.882 STDOUT terraform:  + availability_zone = "nova" 2025-06-19 09:42:41.882330 | orchestrator | 09:42:41.882 STDOUT terraform:  + config_drive = true 2025-06-19 09:42:41.882363 | orchestrator | 09:42:41.882 STDOUT terraform:  + created = (known after apply) 2025-06-19 09:42:41.882396 | orchestrator | 09:42:41.882 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-19 09:42:41.882424 | orchestrator | 09:42:41.882 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-19 09:42:41.882443 | orchestrator | 
09:42:41.882 STDOUT terraform:  + force_delete = false 2025-06-19 09:42:41.882479 | orchestrator | 09:42:41.882 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-19 09:42:41.882510 | orchestrator | 09:42:41.882 STDOUT terraform:  + id = (known after apply) 2025-06-19 09:42:41.882544 | orchestrator | 09:42:41.882 STDOUT terraform:  + image_id = (known after apply) 2025-06-19 09:42:41.882579 | orchestrator | 09:42:41.882 STDOUT terraform:  + image_name = (known after apply) 2025-06-19 09:42:41.882598 | orchestrator | 09:42:41.882 STDOUT terraform:  + key_pair = "testbed" 2025-06-19 09:42:41.882629 | orchestrator | 09:42:41.882 STDOUT terraform:  + name = "testbed-node-1" 2025-06-19 09:42:41.882648 | orchestrator | 09:42:41.882 STDOUT terraform:  + power_state = "active" 2025-06-19 09:42:41.882683 | orchestrator | 09:42:41.882 STDOUT terraform:  + region = (known after apply) 2025-06-19 09:42:41.882716 | orchestrator | 09:42:41.882 STDOUT terraform:  + security_groups = (known after apply) 2025-06-19 09:42:41.882736 | orchestrator | 09:42:41.882 STDOUT terraform:  + stop_before_destroy = false 2025-06-19 09:42:41.882769 | orchestrator | 09:42:41.882 STDOUT terraform:  + updated = (known after apply) 2025-06-19 09:42:41.882860 | orchestrator | 09:42:41.882 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-06-19 09:42:41.882868 | orchestrator | 09:42:41.882 STDOUT terraform:  + block_device { 2025-06-19 09:42:41.882905 | orchestrator | 09:42:41.882 STDOUT terraform:  + boot_index = 0 2025-06-19 09:42:41.882948 | orchestrator | 09:42:41.882 STDOUT terraform:  + delete_on_termination = false 2025-06-19 09:42:41.882989 | orchestrator | 09:42:41.882 STDOUT terraform:  + destination_type = "volume" 2025-06-19 09:42:41.883038 | orchestrator | 09:42:41.882 STDOUT terraform:  + multiattach = false 2025-06-19 09:42:41.883093 | orchestrator | 09:42:41.883 STDOUT terraform:  + source_type = "volume" 2025-06-19 09:42:41.883160 | 
orchestrator | 09:42:41.883 STDOUT terraform:  + uuid = (known after apply) 2025-06-19 09:42:41.883185 | orchestrator | 09:42:41.883 STDOUT terraform:  } 2025-06-19 09:42:41.883210 | orchestrator | 09:42:41.883 STDOUT terraform:  + network { 2025-06-19 09:42:41.883216 | orchestrator | 09:42:41.883 STDOUT terraform:  + access_network = false 2025-06-19 09:42:41.883257 | orchestrator | 09:42:41.883 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-19 09:42:41.883287 | orchestrator | 09:42:41.883 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-19 09:42:41.883318 | orchestrator | 09:42:41.883 STDOUT terraform:  + mac = (known after apply) 2025-06-19 09:42:41.883349 | orchestrator | 09:42:41.883 STDOUT terraform:  + name = (known after apply) 2025-06-19 09:42:41.883380 | orchestrator | 09:42:41.883 STDOUT terraform:  + port = (known after apply) 2025-06-19 09:42:41.883409 | orchestrator | 09:42:41.883 STDOUT terraform:  + uuid = (known after apply) 2025-06-19 09:42:41.883416 | orchestrator | 09:42:41.883 STDOUT terraform:  } 2025-06-19 09:42:41.883421 | orchestrator | 09:42:41.883 STDOUT terraform:  } 2025-06-19 09:42:41.883471 | orchestrator | 09:42:41.883 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-06-19 09:42:41.883511 | orchestrator | 09:42:41.883 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-19 09:42:41.883545 | orchestrator | 09:42:41.883 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-19 09:42:41.883578 | orchestrator | 09:42:41.883 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-19 09:42:41.883612 | orchestrator | 09:42:41.883 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-19 09:42:41.883648 | orchestrator | 09:42:41.883 STDOUT terraform:  + all_tags = (known after apply) 2025-06-19 09:42:41.883667 | orchestrator | 09:42:41.883 STDOUT terraform:  + availability_zone = "nova" 2025-06-19 09:42:41.883687 | 
orchestrator | 09:42:41.883 STDOUT terraform:  + config_drive = true 2025-06-19 09:42:41.883720 | orchestrator | 09:42:41.883 STDOUT terraform:  + created = (known after apply) 2025-06-19 09:42:41.883754 | orchestrator | 09:42:41.883 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-19 09:42:41.883782 | orchestrator | 09:42:41.883 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-19 09:42:41.883816 | orchestrator | 09:42:41.883 STDOUT terraform:  + force_delete = false 2025-06-19 09:42:41.883848 | orchestrator | 09:42:41.883 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-19 09:42:41.883883 | orchestrator | 09:42:41.883 STDOUT terraform:  + id = (known after apply) 2025-06-19 09:42:41.883918 | orchestrator | 09:42:41.883 STDOUT terraform:  + image_id = (known after apply) 2025-06-19 09:42:41.883951 | orchestrator | 09:42:41.883 STDOUT terraform:  + image_name = (known after apply) 2025-06-19 09:42:41.883977 | orchestrator | 09:42:41.883 STDOUT terraform:  + key_pair = "testbed" 2025-06-19 09:42:41.884007 | orchestrator | 09:42:41.883 STDOUT terraform:  + name = "testbed-node-2" 2025-06-19 09:42:41.884026 | orchestrator | 09:42:41.883 STDOUT terraform:  + power_state = "active" 2025-06-19 09:42:41.884063 | orchestrator | 09:42:41.884 STDOUT terraform:  + region = (known after apply) 2025-06-19 09:42:41.884096 | orchestrator | 09:42:41.884 STDOUT terraform:  + security_groups = (known after apply) 2025-06-19 09:42:41.884115 | orchestrator | 09:42:41.884 STDOUT terraform:  + stop_before_destroy = false 2025-06-19 09:42:41.884151 | orchestrator | 09:42:41.884 STDOUT terraform:  + updated = (known after apply) 2025-06-19 09:42:41.884200 | orchestrator | 09:42:41.884 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-06-19 09:42:41.884207 | orchestrator | 09:42:41.884 STDOUT terraform:  + block_device { 2025-06-19 09:42:41.884239 | orchestrator | 09:42:41.884 STDOUT terraform:  + boot_index = 0 
2025-06-19 09:42:41.884 | orchestrator | 09:42:41.884 STDOUT terraform:
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }
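The instance plan entries above follow the pattern Terraform emits for a count-based, boot-from-volume compute resource. As a rough sketch only (resource references such as `var.number_of_nodes`, `openstack_blockstorage_volume_v3.node_volume`, and the port reference are assumptions for illustration; only the literal values visible in the plan are taken from the log), the declaration behind `node_server[0..5]` could look like:

```hcl
# Illustrative sketch, not the actual testbed repository code.
# var.number_of_nodes and the referenced volume/port resources are assumed names;
# the literal values (flavor, key pair, AZ, naming scheme) come from the plan output.
resource "openstack_compute_instance_v2" "node_server" {
  count             = var.number_of_nodes            # plan shows indices [0..5]
  name              = "testbed-node-${count.index}"
  availability_zone = "nova"
  flavor_name       = "OSISM-8V-32"
  key_pair          = "testbed"
  config_drive      = true
  power_state       = "active"
  user_data         = file("user_data.sh")           # shown as a hash in the plan

  # Boot from a pre-created volume; the volume survives instance deletion.
  block_device {
    boot_index            = 0
    source_type           = "volume"
    destination_type      = "volume"
    delete_on_termination = false
    uuid                  = openstack_blockstorage_volume_v3.node_volume[count.index].id
  }

  # Attach via a pre-created management port rather than a network/subnet pair.
  network {
    port = openstack_networking_port_v2.node_port_management[count.index].id
  }
}
```

Because `delete_on_termination = false` and the boot volume is passed in via `uuid`, destroying an instance leaves its root disk intact, which is why the plan shows `source_type = "volume"` instead of an `image_id` on the instance itself.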
2025-06-19 09:42:41.889 | orchestrator | 09:42:41.889 STDOUT terraform:
  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain            = (known after apply)
      + external              = (known after apply)
      + id                    = (known after apply)
      + mtu                   = (known after apply)
      + name                  = "net-testbed-management"
      + port_security_enabled = (known after apply)
      + qos_policy_id         = (known after apply)
      + region                = (known after apply)
      + shared                = (known after apply)
      + tenant_id             = (known after apply)
      + transparent_vlan      = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }
terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-06-19 09:42:41.895679 | orchestrator | 09:42:41.895 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-19 09:42:41.895717 | orchestrator | 09:42:41.895 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-19 09:42:41.895736 | orchestrator | 09:42:41.895 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-19 09:42:41.895780 | orchestrator | 09:42:41.895 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-19 09:42:41.895912 | orchestrator | 09:42:41.895 STDOUT terraform:  + all_tags = (known after apply) 2025-06-19 09:42:41.895933 | orchestrator | 09:42:41.895 STDOUT terraform:  + device_id = (known after apply) 2025-06-19 09:42:41.895942 | orchestrator | 09:42:41.895 STDOUT terraform:  + device_owner = (known after apply) 2025-06-19 09:42:41.895947 | orchestrator | 09:42:41.895 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-19 09:42:41.895979 | orchestrator | 09:42:41.895 STDOUT terraform:  + dns_name = (known after apply) 2025-06-19 09:42:41.896014 | orchestrator | 09:42:41.895 STDOUT terraform:  + id = (known after apply) 2025-06-19 09:42:41.896049 | orchestrator | 09:42:41.896 STDOUT terraform:  + mac_address = (known after apply) 2025-06-19 09:42:41.896084 | orchestrator | 09:42:41.896 STDOUT terraform:  + network_id = (known after apply) 2025-06-19 09:42:41.896118 | orchestrator | 09:42:41.896 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-19 09:42:41.896153 | orchestrator | 09:42:41.896 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-19 09:42:41.896189 | orchestrator | 09:42:41.896 STDOUT terraform:  + region = (known after apply) 2025-06-19 09:42:41.896223 | orchestrator | 09:42:41.896 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-19 09:42:41.896258 | orchestrator | 09:42:41.896 STDOUT terraform:  + 
tenant_id = (known after apply) 2025-06-19 09:42:41.896278 | orchestrator | 09:42:41.896 STDOUT terraform:  + allowed_address_pairs { 2025-06-19 09:42:41.896307 | orchestrator | 09:42:41.896 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-19 09:42:41.896314 | orchestrator | 09:42:41.896 STDOUT terraform:  } 2025-06-19 09:42:41.896335 | orchestrator | 09:42:41.896 STDOUT terraform:  + allowed_address_pairs { 2025-06-19 09:42:41.896363 | orchestrator | 09:42:41.896 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-19 09:42:41.896370 | orchestrator | 09:42:41.896 STDOUT terraform:  } 2025-06-19 09:42:41.896391 | orchestrator | 09:42:41.896 STDOUT terraform:  + allowed_address_pairs { 2025-06-19 09:42:41.896419 | orchestrator | 09:42:41.896 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-19 09:42:41.896432 | orchestrator | 09:42:41.896 STDOUT terraform:  } 2025-06-19 09:42:41.896450 | orchestrator | 09:42:41.896 STDOUT terraform:  + allowed_address_pairs { 2025-06-19 09:42:41.896477 | orchestrator | 09:42:41.896 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-19 09:42:41.896483 | orchestrator | 09:42:41.896 STDOUT terraform:  } 2025-06-19 09:42:41.896507 | orchestrator | 09:42:41.896 STDOUT terraform:  + binding (known after apply) 2025-06-19 09:42:41.896514 | orchestrator | 09:42:41.896 STDOUT terraform:  + fixed_ip { 2025-06-19 09:42:41.896542 | orchestrator | 09:42:41.896 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-06-19 09:42:41.896571 | orchestrator | 09:42:41.896 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-19 09:42:41.896577 | orchestrator | 09:42:41.896 STDOUT terraform:  } 2025-06-19 09:42:41.896593 | orchestrator | 09:42:41.896 STDOUT terraform:  } 2025-06-19 09:42:41.896639 | orchestrator | 09:42:41.896 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-06-19 09:42:41.896682 | orchestrator | 09:42:41.896 STDOUT terraform:  + resource 
"openstack_networking_port_v2" "node_port_management" { 2025-06-19 09:42:41.896716 | orchestrator | 09:42:41.896 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-19 09:42:41.896750 | orchestrator | 09:42:41.896 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-19 09:42:41.896784 | orchestrator | 09:42:41.896 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-19 09:42:41.896834 | orchestrator | 09:42:41.896 STDOUT terraform:  + all_tags = (known after apply) 2025-06-19 09:42:41.896868 | orchestrator | 09:42:41.896 STDOUT terraform:  + device_id = (known after apply) 2025-06-19 09:42:41.896902 | orchestrator | 09:42:41.896 STDOUT terraform:  + device_owner = (known after apply) 2025-06-19 09:42:41.896941 | orchestrator | 09:42:41.896 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-19 09:42:41.896973 | orchestrator | 09:42:41.896 STDOUT terraform:  + dns_name = (known after apply) 2025-06-19 09:42:41.897016 | orchestrator | 09:42:41.896 STDOUT terraform:  + id = (known after apply) 2025-06-19 09:42:41.897049 | orchestrator | 09:42:41.897 STDOUT terraform:  + mac_address = (known after apply) 2025-06-19 09:42:41.897085 | orchestrator | 09:42:41.897 STDOUT terraform:  + network_id = (known after apply) 2025-06-19 09:42:41.897231 | orchestrator | 09:42:41.897 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-19 09:42:41.897340 | orchestrator | 09:42:41.897 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-19 09:42:41.897415 | orchestrator | 09:42:41.897 STDOUT terraform:  + region = (known after apply) 2025-06-19 09:42:41.897486 | orchestrator | 09:42:41.897 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-19 09:42:41.897558 | orchestrator | 09:42:41.897 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-19 09:42:41.897596 | orchestrator | 09:42:41.897 STDOUT terraform:  + allowed_address_pairs { 2025-06-19 09:42:41.897689 | 
orchestrator | 09:42:41.897 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-19 09:42:41.897708 | orchestrator | 09:42:41.897 STDOUT terraform:  } 2025-06-19 09:42:41.897747 | orchestrator | 09:42:41.897 STDOUT terraform:  + allowed_address_pairs { 2025-06-19 09:42:41.897816 | orchestrator | 09:42:41.897 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-19 09:42:41.897839 | orchestrator | 09:42:41.897 STDOUT terraform:  } 2025-06-19 09:42:41.897876 | orchestrator | 09:42:41.897 STDOUT terraform:  + allowed_address_pairs { 2025-06-19 09:42:41.897930 | orchestrator | 09:42:41.897 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-19 09:42:41.897956 | orchestrator | 09:42:41.897 STDOUT terraform:  } 2025-06-19 09:42:41.897992 | orchestrator | 09:42:41.897 STDOUT terraform:  + allowed_address_pairs { 2025-06-19 09:42:41.898062 | orchestrator | 09:42:41.897 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-19 09:42:41.898086 | orchestrator | 09:42:41.898 STDOUT terraform:  } 2025-06-19 09:42:41.898127 | orchestrator | 09:42:41.898 STDOUT terraform:  + binding (known after apply) 2025-06-19 09:42:41.898151 | orchestrator | 09:42:41.898 STDOUT terraform:  + fixed_ip { 2025-06-19 09:42:41.898197 | orchestrator | 09:42:41.898 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-06-19 09:42:41.898249 | orchestrator | 09:42:41.898 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-19 09:42:41.898272 | orchestrator | 09:42:41.898 STDOUT terraform:  } 2025-06-19 09:42:41.898293 | orchestrator | 09:42:41.898 STDOUT terraform:  } 2025-06-19 09:42:41.898383 | orchestrator | 09:42:41.898 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-06-19 09:42:41.898473 | orchestrator | 09:42:41.898 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-19 09:42:41.898539 | orchestrator | 09:42:41.898 STDOUT terraform:  + admin_state_up = (known after 
apply) 2025-06-19 09:42:41.898605 | orchestrator | 09:42:41.898 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-19 09:42:41.898669 | orchestrator | 09:42:41.898 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-19 09:42:41.898735 | orchestrator | 09:42:41.898 STDOUT terraform:  + all_tags = (known after apply) 2025-06-19 09:42:41.898818 | orchestrator | 09:42:41.898 STDOUT terraform:  + device_id = (known after apply) 2025-06-19 09:42:41.898887 | orchestrator | 09:42:41.898 STDOUT terraform:  + device_owner = (known after apply) 2025-06-19 09:42:41.898947 | orchestrator | 09:42:41.898 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-19 09:42:41.899014 | orchestrator | 09:42:41.898 STDOUT terraform:  + dns_name = (known after apply) 2025-06-19 09:42:41.899082 | orchestrator | 09:42:41.899 STDOUT terraform:  + id = (known after apply) 2025-06-19 09:42:41.899148 | orchestrator | 09:42:41.899 STDOUT terraform:  + mac_address = (known after apply) 2025-06-19 09:42:41.899217 | orchestrator | 09:42:41.899 STDOUT terraform:  + network_id = (known after apply) 2025-06-19 09:42:41.899286 | orchestrator | 09:42:41.899 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-19 09:42:41.899344 | orchestrator | 09:42:41.899 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-19 09:42:41.899413 | orchestrator | 09:42:41.899 STDOUT terraform:  + region = (known after apply) 2025-06-19 09:42:41.899478 | orchestrator | 09:42:41.899 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-19 09:42:41.899543 | orchestrator | 09:42:41.899 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-19 09:42:41.899577 | orchestrator | 09:42:41.899 STDOUT terraform:  + allowed_address_pairs { 2025-06-19 09:42:41.899630 | orchestrator | 09:42:41.899 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-19 09:42:41.899652 | orchestrator | 09:42:41.899 STDOUT terraform:  } 2025-06-19 
09:42:41.899687 | orchestrator | 09:42:41.899 STDOUT terraform:  + allowed_address_pairs { 2025-06-19 09:42:41.899742 | orchestrator | 09:42:41.899 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-19 09:42:41.899766 | orchestrator | 09:42:41.899 STDOUT terraform:  } 2025-06-19 09:42:41.899814 | orchestrator | 09:42:41.899 STDOUT terraform:  + allowed_address_pairs { 2025-06-19 09:42:41.899864 | orchestrator | 09:42:41.899 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-19 09:42:41.899887 | orchestrator | 09:42:41.899 STDOUT terraform:  } 2025-06-19 09:42:41.899921 | orchestrator | 09:42:41.899 STDOUT terraform:  + allowed_address_pairs { 2025-06-19 09:42:41.899971 | orchestrator | 09:42:41.899 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-19 09:42:41.899994 | orchestrator | 09:42:41.899 STDOUT terraform:  } 2025-06-19 09:42:41.900036 | orchestrator | 09:42:41.899 STDOUT terraform:  + binding (known after apply) 2025-06-19 09:42:41.900060 | orchestrator | 09:42:41.900 STDOUT terraform:  + fixed_ip { 2025-06-19 09:42:41.900104 | orchestrator | 09:42:41.900 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-06-19 09:42:41.900158 | orchestrator | 09:42:41.900 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-19 09:42:41.900184 | orchestrator | 09:42:41.900 STDOUT terraform:  } 2025-06-19 09:42:41.900207 | orchestrator | 09:42:41.900 STDOUT terraform:  } 2025-06-19 09:42:41.900294 | orchestrator | 09:42:41.900 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-06-19 09:42:41.900378 | orchestrator | 09:42:41.900 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-19 09:42:41.900442 | orchestrator | 09:42:41.900 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-19 09:42:41.900508 | orchestrator | 09:42:41.900 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-19 09:42:41.900571 | orchestrator | 
09:42:41.900 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-19 09:42:41.900638 | orchestrator | 09:42:41.900 STDOUT terraform:  + all_tags = (known after apply) 2025-06-19 09:42:41.900705 | orchestrator | 09:42:41.900 STDOUT terraform:  + device_id = (known after apply) 2025-06-19 09:42:41.900771 | orchestrator | 09:42:41.900 STDOUT terraform:  + device_owner = (known after apply) 2025-06-19 09:42:41.900863 | orchestrator | 09:42:41.900 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-19 09:42:41.900927 | orchestrator | 09:42:41.900 STDOUT terraform:  + dns_name = (known after apply) 2025-06-19 09:42:41.900995 | orchestrator | 09:42:41.900 STDOUT terraform:  + id = (known after apply) 2025-06-19 09:42:41.901062 | orchestrator | 09:42:41.900 STDOUT terraform:  + mac_address = (known after apply) 2025-06-19 09:42:41.901130 | orchestrator | 09:42:41.901 STDOUT terraform:  + network_id = (known after apply) 2025-06-19 09:42:41.901192 | orchestrator | 09:42:41.901 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-19 09:42:41.901256 | orchestrator | 09:42:41.901 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-19 09:42:41.901323 | orchestrator | 09:42:41.901 STDOUT terraform:  + region = (known after apply) 2025-06-19 09:42:41.901386 | orchestrator | 09:42:41.901 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-19 09:42:41.901452 | orchestrator | 09:42:41.901 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-19 09:42:41.901487 | orchestrator | 09:42:41.901 STDOUT terraform:  + allowed_address_pairs { 2025-06-19 09:42:41.901538 | orchestrator | 09:42:41.901 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-19 09:42:41.901560 | orchestrator | 09:42:41.901 STDOUT terraform:  } 2025-06-19 09:42:41.901594 | orchestrator | 09:42:41.901 STDOUT terraform:  + allowed_address_pairs { 2025-06-19 09:42:41.901646 | orchestrator | 09:42:41.901 STDOUT terraform: 
 + ip_address = "192.168.16.254/20" 2025-06-19 09:42:41.901670 | orchestrator | 09:42:41.901 STDOUT terraform:  } 2025-06-19 09:42:41.901703 | orchestrator | 09:42:41.901 STDOUT terraform:  + allowed_address_pairs { 2025-06-19 09:42:41.901754 | orchestrator | 09:42:41.901 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-19 09:42:41.901778 | orchestrator | 09:42:41.901 STDOUT terraform:  } 2025-06-19 09:42:41.901842 | orchestrator | 09:42:41.901 STDOUT terraform:  + allowed_address_pairs { 2025-06-19 09:42:41.901894 | orchestrator | 09:42:41.901 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-19 09:42:41.901916 | orchestrator | 09:42:41.901 STDOUT terraform:  } 2025-06-19 09:42:41.901957 | orchestrator | 09:42:41.901 STDOUT terraform:  + binding (known after apply) 2025-06-19 09:42:41.901980 | orchestrator | 09:42:41.901 STDOUT terraform:  + fixed_ip { 2025-06-19 09:42:41.902039 | orchestrator | 09:42:41.901 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-06-19 09:42:41.902097 | orchestrator | 09:42:41.902 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-19 09:42:41.902108 | orchestrator | 09:42:41.902 STDOUT terraform:  } 2025-06-19 09:42:41.902134 | orchestrator | 09:42:41.902 STDOUT terraform:  } 2025-06-19 09:42:41.902220 | orchestrator | 09:42:41.902 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-06-19 09:42:41.902309 | orchestrator | 09:42:41.902 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-06-19 09:42:41.902335 | orchestrator | 09:42:41.902 STDOUT terraform:  + force_destroy = false 2025-06-19 09:42:41.902388 | orchestrator | 09:42:41.902 STDOUT terraform:  + id = (known after apply) 2025-06-19 09:42:41.902437 | orchestrator | 09:42:41.902 STDOUT terraform:  + port_id = (known after apply) 2025-06-19 09:42:41.902486 | orchestrator | 09:42:41.902 STDOUT terraform:  + region = (known after apply) 2025-06-19 
09:42:41.902533 | orchestrator | 09:42:41.902 STDOUT terraform:  + router_id = (known after apply) 2025-06-19 09:42:41.902581 | orchestrator | 09:42:41.902 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-19 09:42:41.902591 | orchestrator | 09:42:41.902 STDOUT terraform:  } 2025-06-19 09:42:41.902661 | orchestrator | 09:42:41.902 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-06-19 09:42:41.902724 | orchestrator | 09:42:41.902 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-06-19 09:42:41.902787 | orchestrator | 09:42:41.902 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-19 09:42:41.902860 | orchestrator | 09:42:41.902 STDOUT terraform:  + all_tags = (known after apply) 2025-06-19 09:42:41.902900 | orchestrator | 09:42:41.902 STDOUT terraform:  + availability_zone_hints = [ 2025-06-19 09:42:41.902914 | orchestrator | 09:42:41.902 STDOUT terraform:  + "nova", 2025-06-19 09:42:41.902946 | orchestrator | 09:42:41.902 STDOUT terraform:  ] 2025-06-19 09:42:41.903002 | orchestrator | 09:42:41.902 STDOUT terraform:  + distributed = (known after apply) 2025-06-19 09:42:41.903069 | orchestrator | 09:42:41.902 STDOUT terraform:  + enable_snat = (known after apply) 2025-06-19 09:42:41.903159 | orchestrator | 09:42:41.903 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-06-19 09:42:41.903235 | orchestrator | 09:42:41.903 STDOUT terraform:  + external_qos_policy_id = (known after apply) 2025-06-19 09:42:41.903285 | orchestrator | 09:42:41.903 STDOUT terraform:  + id = (known after apply) 2025-06-19 09:42:41.903337 | orchestrator | 09:42:41.903 STDOUT terraform:  + name = "testbed" 2025-06-19 09:42:41.903418 | orchestrator | 09:42:41.903 STDOUT terraform:  + region = (known after apply) 2025-06-19 09:42:41.903483 | orchestrator | 09:42:41.903 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-19 09:42:41.903536 | orchestrator | 
09:42:41.903 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-06-19 09:42:41.903546 | orchestrator | 09:42:41.903 STDOUT terraform:  } 2025-06-19 09:42:41.903653 | orchestrator | 09:42:41.903 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-06-19 09:42:41.903763 | orchestrator | 09:42:41.903 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-06-19 09:42:41.903818 | orchestrator | 09:42:41.903 STDOUT terraform:  + description = "ssh" 2025-06-19 09:42:41.903870 | orchestrator | 09:42:41.903 STDOUT terraform:  + direction = "ingress" 2025-06-19 09:42:41.903915 | orchestrator | 09:42:41.903 STDOUT terraform:  + ethertype = "IPv4" 2025-06-19 09:42:41.903983 | orchestrator | 09:42:41.903 STDOUT terraform:  + id = (known after apply) 2025-06-19 09:42:41.904024 | orchestrator | 09:42:41.903 STDOUT terraform:  + port_range_max = 22 2025-06-19 09:42:41.904066 | orchestrator | 09:42:41.904 STDOUT terraform:  + port_range_min = 22 2025-06-19 09:42:41.904110 | orchestrator | 09:42:41.904 STDOUT terraform:  + protocol = "tcp" 2025-06-19 09:42:41.904175 | orchestrator | 09:42:41.904 STDOUT terraform:  + region = (known after apply) 2025-06-19 09:42:41.904243 | orchestrator | 09:42:41.904 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-06-19 09:42:41.904307 | orchestrator | 09:42:41.904 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-19 09:42:41.904358 | orchestrator | 09:42:41.904 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-19 09:42:41.904420 | orchestrator | 09:42:41.904 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-19 09:42:41.904485 | orchestrator | 09:42:41.904 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-19 09:42:41.904495 | orchestrator | 09:42:41.904 STDOUT terraform:  } 2025-06-19 09:42:41.904600 | orchestrator | 09:42:41.904 STDOUT 
terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-06-19 09:42:41.904700 | orchestrator | 09:42:41.904 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-06-19 09:42:41.904751 | orchestrator | 09:42:41.904 STDOUT terraform:  + description = "wireguard" 2025-06-19 09:42:41.904816 | orchestrator | 09:42:41.904 STDOUT terraform:  + direction = "ingress" 2025-06-19 09:42:41.904964 | orchestrator | 09:42:41.904 STDOUT terraform:  + ethertype = "IPv4" 2025-06-19 09:42:41.905003 | orchestrator | 09:42:41.904 STDOUT terraform:  + id = (known after apply) 2025-06-19 09:42:41.905016 | orchestrator | 09:42:41.904 STDOUT terraform:  + port_range_max = 51820 2025-06-19 09:42:41.905033 | orchestrator | 09:42:41.904 STDOUT terraform:  + port_range_min = 51820 2025-06-19 09:42:41.905043 | orchestrator | 09:42:41.904 STDOUT terraform:  + protocol = "udp" 2025-06-19 09:42:41.905053 | orchestrator | 09:42:41.904 STDOUT terraform:  + region = (known after apply) 2025-06-19 09:42:41.905067 | orchestrator | 09:42:41.905 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-06-19 09:42:41.905080 | orchestrator | 09:42:41.905 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-19 09:42:41.905145 | orchestrator | 09:42:41.905 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-19 09:42:41.905173 | orchestrator | 09:42:41.905 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-19 09:42:41.905189 | orchestrator | 09:42:41.905 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-19 09:42:41.905199 | orchestrator | 09:42:41.905 STDOUT terraform:  } 2025-06-19 09:42:41.905243 | orchestrator | 09:42:41.905 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-06-19 09:42:41.905297 | orchestrator | 09:42:41.905 STDOUT terraform:  + resource 
"openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-06-19 09:42:41.905313 | orchestrator | 09:42:41.905 STDOUT terraform:  + direction = "ingress" 2025-06-19 09:42:41.905327 | orchestrator | 09:42:41.905 STDOUT terraform:  + ethertype = "IPv4" 2025-06-19 09:42:41.905351 | orchestrator | 09:42:41.905 STDOUT terraform:  + id = (known after apply) 2025-06-19 09:42:41.905394 | orchestrator | 09:42:41.905 STDOUT terraform:  + protocol = "tcp" 2025-06-19 09:42:41.905408 | orchestrator | 09:42:41.905 STDOUT terraform:  + region = (known after apply) 2025-06-19 09:42:41.905458 | orchestrator | 09:42:41.905 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-06-19 09:42:41.905474 | orchestrator | 09:42:41.905 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-19 09:42:41.905530 | orchestrator | 09:42:41.905 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-06-19 09:42:41.905552 | orchestrator | 09:42:41.905 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-19 09:42:41.905596 | orchestrator | 09:42:41.905 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-19 09:42:41.905612 | orchestrator | 09:42:41.905 STDOUT terraform:  } 2025-06-19 09:42:41.905634 | orchestrator | 09:42:41.905 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-06-19 09:42:41.905695 | orchestrator | 09:42:41.905 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-06-19 09:42:41.905710 | orchestrator | 09:42:41.905 STDOUT terraform:  + direction = "ingress" 2025-06-19 09:42:41.905723 | orchestrator | 09:42:41.905 STDOUT terraform:  + ethertype = "IPv4" 2025-06-19 09:42:41.905777 | orchestrator | 09:42:41.905 STDOUT terraform:  + id = (known after apply) 2025-06-19 09:42:41.905832 | orchestrator | 09:42:41.905 STDOUT terraform:  + protocol = "udp" 2025-06-19 09:42:41.905844 | 
orchestrator | 09:42:41.905 STDOUT terraform:  + region = (known after apply) 2025-06-19 09:42:41.905857 | orchestrator | 09:42:41.905 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-06-19 09:42:41.905906 | orchestrator | 09:42:41.905 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-19 09:42:41.905921 | orchestrator | 09:42:41.905 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-06-19 09:42:41.905972 | orchestrator | 09:42:41.905 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-19 09:42:41.905988 | orchestrator | 09:42:41.905 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-19 09:42:41.906000 | orchestrator | 09:42:41.905 STDOUT terraform:  } 2025-06-19 09:42:41.906095 | orchestrator | 09:42:41.905 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-06-19 09:42:41.906124 | orchestrator | 09:42:41.906 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-06-19 09:42:41.906139 | orchestrator | 09:42:41.906 STDOUT terraform:  + direction = "ingress" 2025-06-19 09:42:41.906180 | orchestrator | 09:42:41.906 STDOUT terraform:  + ethertype = "IPv4" 2025-06-19 09:42:41.906195 | orchestrator | 09:42:41.906 STDOUT terraform:  + id = (known after apply) 2025-06-19 09:42:41.906245 | orchestrator | 09:42:41.906 STDOUT terraform:  + protocol = "icmp" 2025-06-19 09:42:41.906276 | orchestrator | 09:42:41.906 STDOUT terraform:  + region = (known after apply) 2025-06-19 09:42:41.906297 | orchestrator | 09:42:41.906 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-06-19 09:42:41.906319 | orchestrator | 09:42:41.906 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-19 09:42:41.906333 | orchestrator | 09:42:41.906 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-19 09:42:41.906388 | orchestrator | 09:42:41.906 STDOUT terraform:  + 
security_group_id = (known after apply)
2025-06-19 09:42:41.906404 | orchestrator | 09:42:41.906 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-19 09:42:41.906416 | orchestrator | 09:42:41.906 STDOUT terraform:  }
2025-06-19 09:42:41.906478 | orchestrator | 09:42:41.906 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
2025-06-19 09:42:41.906520 | orchestrator | 09:42:41.906 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
2025-06-19 09:42:41.906535 | orchestrator | 09:42:41.906 STDOUT terraform:  + direction = "ingress"
2025-06-19 09:42:41.906584 | orchestrator | 09:42:41.906 STDOUT terraform:  + ethertype = "IPv4"
2025-06-19 09:42:41.906599 | orchestrator | 09:42:41.906 STDOUT terraform:  + id = (known after apply)
2025-06-19 09:42:41.906612 | orchestrator | 09:42:41.906 STDOUT terraform:  + protocol = "tcp"
2025-06-19 09:42:41.906665 | orchestrator | 09:42:41.906 STDOUT terraform:  + region = (known after apply)
2025-06-19 09:42:41.906680 | orchestrator | 09:42:41.906 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-06-19 09:42:41.906732 | orchestrator | 09:42:41.906 STDOUT terraform:  + remote_group_id = (known after apply)
2025-06-19 09:42:41.906748 | orchestrator | 09:42:41.906 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-06-19 09:42:41.906787 | orchestrator | 09:42:41.906 STDOUT terraform:  + security_group_id = (known after apply)
2025-06-19 09:42:41.906823 | orchestrator | 09:42:41.906 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-19 09:42:41.906834 | orchestrator | 09:42:41.906 STDOUT terraform:  }
2025-06-19 09:42:41.906886 | orchestrator | 09:42:41.906 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
2025-06-19 09:42:41.906928 | orchestrator | 09:42:41.906 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
2025-06-19 09:42:41.906943 | orchestrator | 09:42:41.906 STDOUT terraform:  + direction = "ingress"
2025-06-19 09:42:41.906956 | orchestrator | 09:42:41.906 STDOUT terraform:  + ethertype = "IPv4"
2025-06-19 09:42:41.907010 | orchestrator | 09:42:41.906 STDOUT terraform:  + id = (known after apply)
2025-06-19 09:42:41.907025 | orchestrator | 09:42:41.906 STDOUT terraform:  + protocol = "udp"
2025-06-19 09:42:41.907065 | orchestrator | 09:42:41.907 STDOUT terraform:  + region = (known after apply)
2025-06-19 09:42:41.907080 | orchestrator | 09:42:41.907 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-06-19 09:42:41.907127 | orchestrator | 09:42:41.907 STDOUT terraform:  + remote_group_id = (known after apply)
2025-06-19 09:42:41.907150 | orchestrator | 09:42:41.907 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-06-19 09:42:41.907205 | orchestrator | 09:42:41.907 STDOUT terraform:  + security_group_id = (known after apply)
2025-06-19 09:42:41.907220 | orchestrator | 09:42:41.907 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-19 09:42:41.907233 | orchestrator | 09:42:41.907 STDOUT terraform:  }
2025-06-19 09:42:41.907294 | orchestrator | 09:42:41.907 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
2025-06-19 09:42:41.907336 | orchestrator | 09:42:41.907 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
2025-06-19 09:42:41.907350 | orchestrator | 09:42:41.907 STDOUT terraform:  + direction = "ingress"
2025-06-19 09:42:41.907400 | orchestrator | 09:42:41.907 STDOUT terraform:  + ethertype = "IPv4"
2025-06-19 09:42:41.907415 | orchestrator | 09:42:41.907 STDOUT terraform:  + id = (known after apply)
2025-06-19 09:42:41.907428 | orchestrator | 09:42:41.907 STDOUT terraform:  + protocol = "icmp"
2025-06-19 09:42:41.907481 | orchestrator | 09:42:41.907 STDOUT terraform:  + region = (known after apply)
2025-06-19 09:42:41.907496 | orchestrator | 09:42:41.907 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-06-19 09:42:41.907536 | orchestrator | 09:42:41.907 STDOUT terraform:  + remote_group_id = (known after apply)
2025-06-19 09:42:41.907552 | orchestrator | 09:42:41.907 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-06-19 09:42:41.907592 | orchestrator | 09:42:41.907 STDOUT terraform:  + security_group_id = (known after apply)
2025-06-19 09:42:41.907608 | orchestrator | 09:42:41.907 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-19 09:42:41.907621 | orchestrator | 09:42:41.907 STDOUT terraform:  }
2025-06-19 09:42:41.907685 | orchestrator | 09:42:41.907 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2025-06-19 09:42:41.907750 | orchestrator | 09:42:41.907 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2025-06-19 09:42:41.907765 | orchestrator | 09:42:41.907 STDOUT terraform:  + description = "vrrp"
2025-06-19 09:42:41.907779 | orchestrator | 09:42:41.907 STDOUT terraform:  + direction = "ingress"
2025-06-19 09:42:41.907831 | orchestrator | 09:42:41.907 STDOUT terraform:  + ethertype = "IPv4"
2025-06-19 09:42:41.907880 | orchestrator | 09:42:41.907 STDOUT terraform:  + id = (known after apply)
2025-06-19 09:42:41.907895 | orchestrator | 09:42:41.907 STDOUT terraform:  + protocol = "112"
2025-06-19 09:42:41.907908 | orchestrator | 09:42:41.907 STDOUT terraform:  + region = (known after apply)
2025-06-19 09:42:41.907957 | orchestrator | 09:42:41.907 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-06-19 09:42:41.907973 | orchestrator | 09:42:41.907 STDOUT terraform:  + remote_group_id = (known after apply)
2025-06-19 09:42:41.907986 | orchestrator | 09:42:41.907 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-06-19 09:42:41.908036 | orchestrator | 09:42:41.907 STDOUT terraform:  + security_group_id = (known after apply)
2025-06-19 09:42:41.908051 | orchestrator | 09:42:41.908 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-19 09:42:41.908064 | orchestrator | 09:42:41.908 STDOUT terraform:  }
2025-06-19 09:42:41.908078 | orchestrator | 09:42:41.908 STDOUT terraform:  # openstack_networking_secgroup
2025-06-19 09:42:41.908156 | orchestrator | 09:42:41.908 STDOUT terraform: _v2.security_group_management will be created
2025-06-19 09:42:41.908198 | orchestrator | 09:42:41.908 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" {
2025-06-19 09:42:41.908212 | orchestrator | 09:42:41.908 STDOUT terraform:  + all_tags = (known after apply)
2025-06-19 09:42:41.908252 | orchestrator | 09:42:41.908 STDOUT terraform:  + description = "management security group"
2025-06-19 09:42:41.908266 | orchestrator | 09:42:41.908 STDOUT terraform:  + id = (known after apply)
2025-06-19 09:42:41.908280 | orchestrator | 09:42:41.908 STDOUT terraform:  + name = "testbed-management"
2025-06-19 09:42:41.908319 | orchestrator | 09:42:41.908 STDOUT terraform:  + region = (known after apply)
2025-06-19 09:42:41.908333 | orchestrator | 09:42:41.908 STDOUT terraform:  + stateful = (known after apply)
2025-06-19 09:42:41.908373 | orchestrator | 09:42:41.908 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-19 09:42:41.908384 | orchestrator | 09:42:41.908 STDOUT terraform:  }
2025-06-19 09:42:41.908423 | orchestrator | 09:42:41.908 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created
2025-06-19 09:42:41.908470 | orchestrator | 09:42:41.908 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" {
2025-06-19 09:42:41.908484 | orchestrator | 09:42:41.908 STDOUT terraform:  + all_tags = (known after apply)
2025-06-19 09:42:41.908498 | orchestrator | 09:42:41.908 STDOUT terraform:  + description = "node security group"
2025-06-19 09:42:41.908549 | orchestrator | 09:42:41.908 STDOUT terraform:  + id = (known after apply)
2025-06-19 09:42:41.908561 | orchestrator | 09:42:41.908 STDOUT terraform:  + name = "testbed-node"
2025-06-19 09:42:41.908573 | orchestrator | 09:42:41.908 STDOUT terraform:  + region = (known after apply)
2025-06-19 09:42:41.908586 | orchestrator | 09:42:41.908 STDOUT terraform:  + stateful = (known after apply)
2025-06-19 09:42:41.908625 | orchestrator | 09:42:41.908 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-19 09:42:41.908637 | orchestrator | 09:42:41.908 STDOUT terraform:  }
2025-06-19 09:42:41.908676 | orchestrator | 09:42:41.908 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created
2025-06-19 09:42:41.908717 | orchestrator | 09:42:41.908 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" {
2025-06-19 09:42:41.908732 | orchestrator | 09:42:41.908 STDOUT terraform:  + all_tags = (known after apply)
2025-06-19 09:42:41.908782 | orchestrator | 09:42:41.908 STDOUT terraform:  + cidr = "192.168.16.0/20"
2025-06-19 09:42:41.908794 | orchestrator | 09:42:41.908 STDOUT terraform:  + dns_nameservers = [
2025-06-19 09:42:41.908824 | orchestrator | 09:42:41.908 STDOUT terraform:  + "8.8.8.8",
2025-06-19 09:42:41.908845 | orchestrator | 09:42:41.908 STDOUT terraform:  + "9.9.9.9",
2025-06-19 09:42:41.908856 | orchestrator | 09:42:41.908 STDOUT terraform:  ]
2025-06-19 09:42:41.908865 | orchestrator | 09:42:41.908 STDOUT terraform:  + enable_dhcp = true
2025-06-19 09:42:41.908878 | orchestrator | 09:42:41.908 STDOUT terraform:  + gateway_ip = (known after apply)
2025-06-19 09:42:41.908929 | orchestrator | 09:42:41.908 STDOUT terraform:  + id = (known after apply)
2025-06-19 09:42:41.908975 | orchestrator | 09:42:41.908 STDOUT terraform:  + ip_version = 4
2025-06-19 09:42:41.908989 | orchestrator | 09:42:41.908 STDOUT terraform:  + ipv6_address_mode = (known after apply)
2025-06-19 09:42:41.908999 | orchestrator | 09:42:41.908 STDOUT terraform:  + ipv6_ra_mode = (known after apply)
2025-06-19 09:42:41.909076 | orchestrator | 09:42:41.908 STDOUT terraform:  + name = "subnet-testbed-management"
2025-06-19 09:42:41.909087 | orchestrator | 09:42:41.908 STDOUT terraform:  + network_id = (known after apply)
2025-06-19 09:42:41.909096 | orchestrator | 09:42:41.909 STDOUT terraform:  + no_gateway = false
2025-06-19 09:42:41.909109 | orchestrator | 09:42:41.909 STDOUT terraform:  + region = (known after apply)
2025-06-19 09:42:41.909119 | orchestrator | 09:42:41.909 STDOUT terraform:  + service_types = (known after apply)
2025-06-19 09:42:41.909169 | orchestrator | 09:42:41.909 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-19 09:42:41.909181 | orchestrator | 09:42:41.909 STDOUT terraform:  + allocation_pool {
2025-06-19 09:42:41.909194 | orchestrator | 09:42:41.909 STDOUT terraform:  + end = "192.168.31.250"
2025-06-19 09:42:41.909207 | orchestrator | 09:42:41.909 STDOUT terraform:  + start = "192.168.31.200"
2025-06-19 09:42:41.909217 | orchestrator | 09:42:41.909 STDOUT terraform:  }
2025-06-19 09:42:41.909230 | orchestrator | 09:42:41.909 STDOUT terraform:  }
2025-06-19 09:42:41.909243 | orchestrator | 09:42:41.909 STDOUT terraform:  # terraform_data.image will be created
2025-06-19 09:42:41.909256 | orchestrator | 09:42:41.909 STDOUT terraform:  + resource "terraform_data" "image" {
2025-06-19 09:42:41.909306 | orchestrator | 09:42:41.909 STDOUT terraform:  + id = (known after apply)
2025-06-19 09:42:41.909318 | orchestrator | 09:42:41.909 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-06-19 09:42:41.909331 | orchestrator | 09:42:41.909 STDOUT terraform:  + output = (known after apply)
2025-06-19 09:42:41.909341 | orchestrator | 09:42:41.909 STDOUT terraform:  }
2025-06-19 09:42:41.910601 | orchestrator | 09:42:41.909 STDOUT terraform:  # terraform_data.image_node will be created
2025-06-19 09:42:41.910657 | orchestrator | 09:42:41.909 STDOUT terraform:  + resource "terraform_data" "image_node" {
2025-06-19 09:42:41.910662 | orchestrator | 09:42:41.909 STDOUT terraform:  + id = (known after apply)
2025-06-19 09:42:41.910666 | orchestrator | 09:42:41.909 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-06-19 09:42:41.910671 | orchestrator | 09:42:41.909 STDOUT terraform:  + output = (known after apply)
2025-06-19 09:42:41.910675 | orchestrator | 09:42:41.909 STDOUT terraform:  }
2025-06-19 09:42:41.910679 | orchestrator | 09:42:41.909 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy.
2025-06-19 09:42:41.910690 | orchestrator | 09:42:41.909 STDOUT terraform: Changes to Outputs:
2025-06-19 09:42:41.910694 | orchestrator | 09:42:41.909 STDOUT terraform:  + manager_address = (sensitive value)
2025-06-19 09:42:41.910698 | orchestrator | 09:42:41.909 STDOUT terraform:  + private_key = (sensitive value)
2025-06-19 09:42:42.122287 | orchestrator | 09:42:42.122 STDOUT terraform: terraform_data.image: Creating...
2025-06-19 09:42:42.122349 | orchestrator | 09:42:42.122 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=b71e018f-cb02-c2a7-acf9-25ccdf7bc023]
2025-06-19 09:42:42.122954 | orchestrator | 09:42:42.122 STDOUT terraform: terraform_data.image_node: Creating...
2025-06-19 09:42:42.123955 | orchestrator | 09:42:42.123 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=028caf30-a14f-7d79-7d47-c3ceecf98104]
2025-06-19 09:42:42.126456 | orchestrator | 09:42:42.126 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading...
2025-06-19 09:42:42.139198 | orchestrator | 09:42:42.138 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2025-06-19 09:42:42.140247 | orchestrator | 09:42:42.140 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2025-06-19 09:42:42.142782 | orchestrator | 09:42:42.142 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating...
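For reference, the plan output above can be reproduced from HCL roughly like the following. This is a hedged sketch reconstructed only from the attributes visible in the plan, not the testbed repository's actual code; in particular, the resource references on the right-hand sides are assumptions.

```hcl
# Sketch: node security group with one of its ingress rules, and the
# management subnet with the allocation pool shown in the plan.
resource "openstack_networking_secgroup_v2" "security_group_node" {
  name        = "testbed-node"
  description = "node security group"
}

resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}

resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}
```

The UDP and ICMP rules (rule2, rule3) and the VRRP rule (protocol "112") in the plan differ from rule1 only in their `protocol` and, for VRRP, a `description`.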
2025-06-19 09:42:42.143074 | orchestrator | 09:42:42.143 STDOUT terraform: openstack_compute_keypair_v2.key: Creating...
2025-06-19 09:42:42.147986 | orchestrator | 09:42:42.147 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2025-06-19 09:42:42.148020 | orchestrator | 09:42:42.147 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2025-06-19 09:42:42.151970 | orchestrator | 09:42:42.151 STDOUT terraform: openstack_networking_network_v2.net_management: Creating...
2025-06-19 09:42:42.152001 | orchestrator | 09:42:42.151 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2025-06-19 09:42:42.155351 | orchestrator | 09:42:42.155 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2025-06-19 09:42:42.581103 | orchestrator | 09:42:42.580 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990]
2025-06-19 09:42:42.590371 | orchestrator | 09:42:42.590 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2025-06-19 09:42:42.620399 | orchestrator | 09:42:42.620 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2025-06-19 09:42:42.627105 | orchestrator | 09:42:42.626 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2025-06-19 09:42:48.135595 | orchestrator | 09:42:48.135 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=4e2ab304-e55a-4d2b-ac16-d864d8210cba]
2025-06-19 09:42:48.138089 | orchestrator | 09:42:48.137 STDOUT terraform: data.openstack_images_image_v2.image: Reading...
2025-06-19 09:42:48.191064 | orchestrator | 09:42:48.190 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990]
2025-06-19 09:42:48.202154 | orchestrator | 09:42:48.201 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-06-19 09:42:52.140474 | orchestrator | 09:42:52.140 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed]
2025-06-19 09:42:52.142808 | orchestrator | 09:42:52.142 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed]
2025-06-19 09:42:52.144915 | orchestrator | 09:42:52.144 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed]
2025-06-19 09:42:52.149219 | orchestrator | 09:42:52.149 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed]
2025-06-19 09:42:52.149520 | orchestrator | 09:42:52.149 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed]
2025-06-19 09:42:52.153519 | orchestrator | 09:42:52.153 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed]
2025-06-19 09:42:52.156876 | orchestrator | 09:42:52.156 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed]
2025-06-19 09:42:52.591764 | orchestrator | 09:42:52.591 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed]
2025-06-19 09:42:52.628097 | orchestrator | 09:42:52.627 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed]
2025-06-19 09:42:52.742870 | orchestrator | 09:42:52.742 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 11s [id=48d47195-a07b-47d0-b7e6-8f07488663d6]
2025-06-19 09:42:52.743701 | orchestrator | 09:42:52.743 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 11s [id=2f17817e-651e-4f9a-8129-c3db8254ad0b]
2025-06-19 09:42:52.753736 | orchestrator | 09:42:52.753 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-06-19 09:42:52.757494 | orchestrator | 09:42:52.757 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 11s [id=5cdb3fff-d4f1-405f-abd7-b446ee32738c]
2025-06-19 09:42:52.757575 | orchestrator | 09:42:52.757 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-06-19 09:42:52.765711 | orchestrator | 09:42:52.765 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-06-19 09:42:52.767936 | orchestrator | 09:42:52.767 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=98d5ad263aad6b838558712f31f63ba65c95ee76]
2025-06-19 09:42:52.774804 | orchestrator | 09:42:52.774 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 11s [id=6a40ab2f-d460-475a-85e2-5470cb1f2b74]
2025-06-19 09:42:52.775035 | orchestrator | 09:42:52.774 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-06-19 09:42:52.783309 | orchestrator | 09:42:52.783 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-06-19 09:42:52.786168 | orchestrator | 09:42:52.785 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=94b5082281c65628121f10749040deb4500b8059]
2025-06-19 09:42:52.788190 | orchestrator | 09:42:52.787 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 11s [id=5fba7027-7a45-483b-8644-e0c0ef304581]
2025-06-19 09:42:52.789321 | orchestrator | 09:42:52.789 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 11s [id=38f445f8-bcf4-4b54-8d34-faf3abd36175]
2025-06-19 09:42:52.792216 | orchestrator | 09:42:52.791 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-06-19 09:42:52.792329 | orchestrator | 09:42:52.792 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-06-19 09:42:52.798737 | orchestrator | 09:42:52.798 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-06-19 09:42:52.843956 | orchestrator | 09:42:52.843 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 11s [id=d7da1435-c5c9-4327-bd6f-1fcfb647c27d]
2025-06-19 09:42:52.851118 | orchestrator | 09:42:52.850 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-06-19 09:42:52.858679 | orchestrator | 09:42:52.858 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 10s [id=1ab95973-8f65-40ad-b4e2-5ebf4e7cdc3f]
2025-06-19 09:42:52.875081 | orchestrator | 09:42:52.874 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 10s [id=6c4f0114-96df-472d-8cd2-75acad9ce658]
2025-06-19 09:42:58.203420 | orchestrator | 09:42:58.203 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed]
2025-06-19 09:42:58.542114 | orchestrator | 09:42:58.541 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 11s [id=236643a8-3fbf-4a38-ac5c-7d15a0179c3a]
2025-06-19 09:42:58.739335 | orchestrator | 09:42:58.739 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=e2e58825-06a8-4251-857e-0ce960ca834b]
2025-06-19 09:42:58.746955 | orchestrator | 09:42:58.746 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-06-19 09:43:02.754717 | orchestrator | 09:43:02.754 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed]
2025-06-19 09:43:02.767164 | orchestrator | 09:43:02.766 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed]
2025-06-19 09:43:02.776361 | orchestrator | 09:43:02.776 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed]
2025-06-19 09:43:02.794279 | orchestrator | 09:43:02.793 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed]
2025-06-19 09:43:02.794361 | orchestrator | 09:43:02.794 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed]
2025-06-19 09:43:02.851714 | orchestrator | 09:43:02.851 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed]
2025-06-19 09:43:03.125024 | orchestrator | 09:43:03.124 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 10s [id=45d6f613-e1c9-4f07-aade-2d2f7c147254]
2025-06-19 09:43:03.149182 | orchestrator | 09:43:03.148 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 10s [id=32c85e8d-b71e-43db-9ec2-d353b455abf6]
2025-06-19 09:43:03.179755 | orchestrator | 09:43:03.179 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 10s [id=8b95ea5b-b272-4ba4-9e64-b7a520d8cc22]
2025-06-19 09:43:03.181337 | orchestrator | 09:43:03.181 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 10s [id=b01ca272-0367-4531-95c0-7a23711a0302]
2025-06-19 09:43:03.226961 | orchestrator | 09:43:03.226 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 10s [id=d3db73c4-91fc-4185-92a8-f3f49747b38e]
2025-06-19 09:43:03.228492 | orchestrator | 09:43:03.228 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 10s [id=7f0bdb43-8485-47f3-898e-78dda552a11f]
2025-06-19 09:43:06.601131 | orchestrator | 09:43:06.600 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 8s [id=b8ff7c7f-738e-48f5-90a5-550dced9dbb6]
2025-06-19 09:43:06.609391 | orchestrator | 09:43:06.609 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-06-19 09:43:06.613139 | orchestrator | 09:43:06.612 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-06-19 09:43:06.613239 | orchestrator | 09:43:06.613 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-06-19 09:43:06.791608 | orchestrator | 09:43:06.791 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=9a0128ae-bc2a-4975-b952-151c37aa5ea7]
2025-06-19 09:43:06.795728 | orchestrator | 09:43:06.795 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=e502f5f6-9e09-4ddc-898a-19ac9e2f556b]
2025-06-19 09:43:06.800967 | orchestrator | 09:43:06.800 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-06-19 09:43:06.801572 | orchestrator | 09:43:06.801 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-06-19 09:43:06.802143 | orchestrator | 09:43:06.801 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-06-19 09:43:06.802364 | orchestrator | 09:43:06.802 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-06-19 09:43:06.807479 | orchestrator | 09:43:06.807 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-06-19 09:43:06.809011 | orchestrator | 09:43:06.808 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-06-19 09:43:06.811258 | orchestrator | 09:43:06.811 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-06-19 09:43:06.813579 | orchestrator | 09:43:06.813 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-06-19 09:43:06.816368 | orchestrator | 09:43:06.816 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-06-19 09:43:06.957895 | orchestrator | 09:43:06.957 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=8c2c948f-5ad8-4773-8461-a6dfad8f98a1]
2025-06-19 09:43:06.965466 | orchestrator | 09:43:06.965 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-06-19 09:43:07.076969 | orchestrator | 09:43:07.076 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=198eb861-427f-4e07-a48c-b38b4b219ef1]
2025-06-19 09:43:07.090993 | orchestrator | 09:43:07.090 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-06-19 09:43:07.135093 | orchestrator | 09:43:07.134 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=feb7bc75-2512-4101-9e90-3f24ed578c90]
2025-06-19 09:43:07.153776 | orchestrator | 09:43:07.153 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-06-19 09:43:07.229625 | orchestrator | 09:43:07.229 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=b2594243-345b-43a7-bf60-8666e43fe86b]
2025-06-19 09:43:07.243338 | orchestrator | 09:43:07.243 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-06-19 09:43:07.327020 | orchestrator | 09:43:07.326 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=cb818c3a-ed41-483d-a5a5-bf04cb543d1a]
2025-06-19 09:43:07.350469 | orchestrator | 09:43:07.350 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-06-19 09:43:07.447355 | orchestrator | 09:43:07.446 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=c0f9325e-c00d-49ce-8ef1-ac70fabab1ff]
2025-06-19 09:43:07.463432 | orchestrator | 09:43:07.463 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-06-19 09:43:07.479754 | orchestrator | 09:43:07.479 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=543fdb70-7d07-4f53-a48a-1b58027d5768]
2025-06-19 09:43:07.485500 | orchestrator | 09:43:07.485 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-06-19 09:43:07.822306 | orchestrator | 09:43:07.821 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=0457caaa-2e14-46aa-a30c-92ad084a9684]
2025-06-19 09:43:08.044172 | orchestrator | 09:43:08.043 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=b8749c74-e75b-4441-b2b4-54064628ddf9]
2025-06-19 09:43:12.468199 | orchestrator | 09:43:12.467 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 5s [id=f1231e8d-4d5a-4329-9c3a-2915f047249c]
2025-06-19 09:43:12.573771 | orchestrator | 09:43:12.573 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 6s [id=ddb6a568-8100-4956-b163-f8449f61f321]
2025-06-19 09:43:12.864237 | orchestrator | 09:43:12.863 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 6s [id=9222de8d-7b3b-449a-bd03-f949947ebef7]
2025-06-19 09:43:13.113637 | orchestrator | 09:43:13.113 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 6s [id=3ed8d97e-7b0b-4cd1-b614-f3205d16609e]
2025-06-19 09:43:13.136540 | orchestrator | 09:43:13.136 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 6s [id=cfeaa5b2-ab1e-431d-979e-a8decf31e726]
2025-06-19 09:43:13.168010 | orchestrator | 09:43:13.167 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 6s [id=7761f150-7be1-4f4b-b8b0-f3a54e4a4fd6]
2025-06-19 09:43:13.221959 | orchestrator | 09:43:13.221 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 6s [id=9c6573a9-497a-435f-a41d-5f99630aad82]
2025-06-19 09:43:14.227730 | orchestrator | 09:43:14.227 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 7s [id=a5c1adc8-154f-424a-b8a2-11a31fe7571c]
2025-06-19 09:43:14.250717 | orchestrator | 09:43:14.250 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-06-19 09:43:14.263847 | orchestrator | 09:43:14.263 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-06-19 09:43:14.264003 | orchestrator | 09:43:14.263 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-06-19 09:43:14.265060 | orchestrator | 09:43:14.264 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-06-19 09:43:14.272471 | orchestrator | 09:43:14.272 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-06-19 09:43:14.279512 | orchestrator | 09:43:14.279 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-06-19 09:43:14.279559 | orchestrator | 09:43:14.279 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-06-19 09:43:21.040854 | orchestrator | 09:43:21.040 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 7s [id=dd324707-acc7-4fb6-8e7d-ae94059e213d]
2025-06-19 09:43:21.053000 | orchestrator | 09:43:21.052 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-06-19 09:43:21.058552 | orchestrator | 09:43:21.058 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-06-19 09:43:21.058622 | orchestrator | 09:43:21.058 STDOUT terraform: local_file.inventory: Creating...
2025-06-19 09:43:21.064039 | orchestrator | 09:43:21.063 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=eb00246da4dff8c719d3c677019066cea9ef6778]
2025-06-19 09:43:21.065110 | orchestrator | 09:43:21.064 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=e45be6b2acdbfd72420b5fbe47e82ce38f38c8f3]
2025-06-19 09:43:21.898234 | orchestrator | 09:43:21.897 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=dd324707-acc7-4fb6-8e7d-ae94059e213d]
2025-06-19 09:43:24.264798 | orchestrator | 09:43:24.264 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-06-19 09:43:24.264967 | orchestrator | 09:43:24.264 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-06-19 09:43:24.265565 | orchestrator | 09:43:24.265 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-06-19 09:43:24.275072 | orchestrator | 09:43:24.274 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-06-19 09:43:24.281289 | orchestrator | 09:43:24.281 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-06-19 09:43:24.282447 | orchestrator | 09:43:24.282 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-06-19 09:43:34.265698 | orchestrator | 09:43:34.265 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-06-19 09:43:34.265822 | orchestrator | 09:43:34.265 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-06-19 09:43:34.266654 | orchestrator | 09:43:34.266 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-06-19 09:43:34.276147 | orchestrator | 09:43:34.275 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-06-19 09:43:34.282314 | orchestrator | 09:43:34.282 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-06-19 09:43:34.283512 | orchestrator | 09:43:34.283 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-06-19 09:43:34.900804 | orchestrator | 09:43:34.900 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 21s [id=b47f2ec4-7867-4147-95d5-760402661f83]
2025-06-19 09:43:35.090061 | orchestrator | 09:43:35.089 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 21s [id=5292e52e-095b-4033-bdb3-be2eb69daf6f]
2025-06-19 09:43:44.265942 | orchestrator | 09:43:44.265 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2025-06-19 09:43:44.266548 | orchestrator | 09:43:44.266 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2025-06-19 09:43:44.283443 | orchestrator | 09:43:44.283 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2025-06-19 09:43:44.283801 | orchestrator | 09:43:44.283 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2025-06-19 09:43:44.736380 | orchestrator | 09:43:44.736 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=558fd448-f4cf-4164-b191-c4fd6bb82217]
2025-06-19 09:43:45.045410 | orchestrator | 09:43:45.045 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=2431d3b9-3eb3-42a2-be42-676d4d22d26d]
2025-06-19 09:43:54.269459 | orchestrator | 09:43:54.269 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed]
2025-06-19 09:43:54.284715 | orchestrator | 09:43:54.284 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed]
2025-06-19 09:43:55.052581 | orchestrator | 09:43:55.052 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 41s [id=295cc11e-512e-4c04-b74a-d9a6c0b540e9]
2025-06-19 09:43:55.534521 | orchestrator | 09:43:55.534 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 42s [id=fa8195da-24f7-4d03-b7bc-8c0bb67d4069]
2025-06-19 09:43:55.562755 | orchestrator | 09:43:55.562 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-06-19 09:43:55.568729 | orchestrator | 09:43:55.568 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=8194282581236767753]
2025-06-19 09:43:55.571352 | orchestrator | 09:43:55.571 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-06-19 09:43:55.571664 | orchestrator | 09:43:55.571 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-06-19 09:43:55.573036 | orchestrator | 09:43:55.572 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-06-19 09:43:55.576308 | orchestrator | 09:43:55.576 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-06-19 09:43:55.577229 | orchestrator | 09:43:55.577 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-06-19 09:43:55.577342 | orchestrator | 09:43:55.577 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-06-19 09:43:55.588604 | orchestrator | 09:43:55.588 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-06-19 09:43:55.590194 | orchestrator | 09:43:55.590 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-06-19 09:43:55.591804 | orchestrator | 09:43:55.591 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-06-19 09:43:55.602809 | orchestrator | 09:43:55.602 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 
2025-06-19 09:44:00.909594 | orchestrator | 09:44:00.909 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 5s [id=295cc11e-512e-4c04-b74a-d9a6c0b540e9/38f445f8-bcf4-4b54-8d34-faf3abd36175] 2025-06-19 09:44:00.946581 | orchestrator | 09:44:00.946 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 5s [id=558fd448-f4cf-4164-b191-c4fd6bb82217/6c4f0114-96df-472d-8cd2-75acad9ce658] 2025-06-19 09:44:00.949457 | orchestrator | 09:44:00.949 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 5s [id=5292e52e-095b-4033-bdb3-be2eb69daf6f/48d47195-a07b-47d0-b7e6-8f07488663d6] 2025-06-19 09:44:00.959086 | orchestrator | 09:44:00.958 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 5s [id=295cc11e-512e-4c04-b74a-d9a6c0b540e9/2f17817e-651e-4f9a-8129-c3db8254ad0b] 2025-06-19 09:44:00.977764 | orchestrator | 09:44:00.977 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 5s [id=558fd448-f4cf-4164-b191-c4fd6bb82217/5cdb3fff-d4f1-405f-abd7-b446ee32738c] 2025-06-19 09:44:00.995575 | orchestrator | 09:44:00.995 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 5s [id=295cc11e-512e-4c04-b74a-d9a6c0b540e9/6a40ab2f-d460-475a-85e2-5470cb1f2b74] 2025-06-19 09:44:01.022214 | orchestrator | 09:44:01.021 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 5s [id=558fd448-f4cf-4164-b191-c4fd6bb82217/5fba7027-7a45-483b-8644-e0c0ef304581] 2025-06-19 09:44:01.039631 | orchestrator | 09:44:01.039 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 5s [id=5292e52e-095b-4033-bdb3-be2eb69daf6f/1ab95973-8f65-40ad-b4e2-5ebf4e7cdc3f] 2025-06-19 09:44:01.215434 | orchestrator | 
09:44:01.214 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 5s [id=5292e52e-095b-4033-bdb3-be2eb69daf6f/d7da1435-c5c9-4327-bd6f-1fcfb647c27d] 2025-06-19 09:44:05.607926 | orchestrator | 09:44:05.607 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-06-19 09:44:15.608624 | orchestrator | 09:44:15.608 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-06-19 09:44:15.947801 | orchestrator | 09:44:15.947 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=71032d5a-86e6-4ac9-8e6b-e7b574c274d0] 2025-06-19 09:44:15.972939 | orchestrator | 09:44:15.972 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 2025-06-19 09:44:15.973055 | orchestrator | 09:44:15.972 STDOUT terraform: Outputs: 2025-06-19 09:44:15.973074 | orchestrator | 09:44:15.972 STDOUT terraform: manager_address = 2025-06-19 09:44:15.973086 | orchestrator | 09:44:15.972 STDOUT terraform: private_key = 2025-06-19 09:44:16.054778 | orchestrator | ok: Runtime: 0:01:44.946830 2025-06-19 09:44:16.088006 | 2025-06-19 09:44:16.088201 | TASK [Create infrastructure (stable)] 2025-06-19 09:44:16.626721 | orchestrator | skipping: Conditional result was False 2025-06-19 09:44:16.648181 | 2025-06-19 09:44:16.648403 | TASK [Fetch manager address] 2025-06-19 09:44:17.108172 | orchestrator | ok 2025-06-19 09:44:17.115693 | 2025-06-19 09:44:17.115831 | TASK [Set manager_host address] 2025-06-19 09:44:17.189299 | orchestrator | ok 2025-06-19 09:44:17.201499 | 2025-06-19 09:44:17.201668 | LOOP [Update ansible collections] 2025-06-19 09:44:20.478618 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-06-19 09:44:20.479104 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-19 09:44:20.479178 | orchestrator | 
Starting galaxy collection install process 2025-06-19 09:44:20.479223 | orchestrator | Process install dependency map 2025-06-19 09:44:20.479315 | orchestrator | Starting collection install process 2025-06-19 09:44:20.479360 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons' 2025-06-19 09:44:20.479428 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons 2025-06-19 09:44:20.479474 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-06-19 09:44:20.479555 | orchestrator | ok: Item: commons Runtime: 0:00:02.941759 2025-06-19 09:44:21.960003 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-19 09:44:21.960178 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-06-19 09:44:21.960233 | orchestrator | Starting galaxy collection install process 2025-06-19 09:44:21.960297 | orchestrator | Process install dependency map 2025-06-19 09:44:21.960338 | orchestrator | Starting collection install process 2025-06-19 09:44:21.960374 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services' 2025-06-19 09:44:21.960411 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services 2025-06-19 09:44:21.960446 | orchestrator | osism.services:999.0.0 was installed successfully 2025-06-19 09:44:21.960501 | orchestrator | ok: Item: services Runtime: 0:00:01.231239 2025-06-19 09:44:21.979662 | 2025-06-19 09:44:21.979803 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-06-19 09:44:32.573617 | orchestrator | ok 2025-06-19 09:44:32.585206 | 2025-06-19 09:44:32.585380 | TASK [Wait a little longer for the manager so that 
everything is ready] 2025-06-19 09:45:32.633102 | orchestrator | ok 2025-06-19 09:45:32.642750 | 2025-06-19 09:45:32.642912 | TASK [Fetch manager ssh hostkey] 2025-06-19 09:45:34.217278 | orchestrator | Output suppressed because no_log was given 2025-06-19 09:45:34.232001 | 2025-06-19 09:45:34.232208 | TASK [Get ssh keypair from terraform environment] 2025-06-19 09:45:34.771923 | orchestrator | ok: Runtime: 0:00:00.009207 2025-06-19 09:45:34.779850 | 2025-06-19 09:45:34.779976 | TASK [Point out that the following task takes some time and does not give any output] 2025-06-19 09:45:34.810679 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-06-19 09:45:34.818403 | 2025-06-19 09:45:34.818515 | TASK [Run manager part 0] 2025-06-19 09:45:36.368784 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-19 09:45:36.584579 | orchestrator | 2025-06-19 09:45:36.584633 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-06-19 09:45:36.584642 | orchestrator | 2025-06-19 09:45:36.584657 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-06-19 09:45:38.221552 | orchestrator | ok: [testbed-manager] 2025-06-19 09:45:38.221654 | orchestrator | 2025-06-19 09:45:38.221703 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-06-19 09:45:38.221724 | orchestrator | 2025-06-19 09:45:38.221743 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-19 09:45:40.110822 | orchestrator | ok: [testbed-manager] 2025-06-19 09:45:40.110890 | orchestrator | 2025-06-19 09:45:40.110903 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-06-19 09:45:40.784215 | 
orchestrator | ok: [testbed-manager] 2025-06-19 09:45:40.784301 | orchestrator | 2025-06-19 09:45:40.784318 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-06-19 09:45:40.831324 | orchestrator | skipping: [testbed-manager] 2025-06-19 09:45:40.831390 | orchestrator | 2025-06-19 09:45:40.831407 | orchestrator | TASK [Update package cache] **************************************************** 2025-06-19 09:45:40.862445 | orchestrator | skipping: [testbed-manager] 2025-06-19 09:45:40.862488 | orchestrator | 2025-06-19 09:45:40.862495 | orchestrator | TASK [Install required packages] *********************************************** 2025-06-19 09:45:40.888595 | orchestrator | skipping: [testbed-manager] 2025-06-19 09:45:40.888636 | orchestrator | 2025-06-19 09:45:40.888643 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-06-19 09:45:40.913941 | orchestrator | skipping: [testbed-manager] 2025-06-19 09:45:40.913984 | orchestrator | 2025-06-19 09:45:40.913989 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-06-19 09:45:40.943290 | orchestrator | skipping: [testbed-manager] 2025-06-19 09:45:40.943340 | orchestrator | 2025-06-19 09:45:40.943350 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-06-19 09:45:40.979740 | orchestrator | skipping: [testbed-manager] 2025-06-19 09:45:40.979796 | orchestrator | 2025-06-19 09:45:40.979807 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-06-19 09:45:41.008691 | orchestrator | skipping: [testbed-manager] 2025-06-19 09:45:41.008738 | orchestrator | 2025-06-19 09:45:41.008746 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-06-19 09:45:41.776460 | orchestrator | changed: [testbed-manager] 2025-06-19 09:45:41.776515 | 
orchestrator | 2025-06-19 09:45:41.776523 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-06-19 09:48:48.716653 | orchestrator | changed: [testbed-manager] 2025-06-19 09:48:48.716796 | orchestrator | 2025-06-19 09:48:48.716819 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-06-19 09:50:07.188045 | orchestrator | changed: [testbed-manager] 2025-06-19 09:50:07.188113 | orchestrator | 2025-06-19 09:50:07.188121 | orchestrator | TASK [Install required packages] *********************************************** 2025-06-19 09:50:28.098257 | orchestrator | changed: [testbed-manager] 2025-06-19 09:50:28.098301 | orchestrator | 2025-06-19 09:50:28.098310 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-06-19 09:50:35.961803 | orchestrator | changed: [testbed-manager] 2025-06-19 09:50:35.961896 | orchestrator | 2025-06-19 09:50:35.961912 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-06-19 09:50:36.010218 | orchestrator | ok: [testbed-manager] 2025-06-19 09:50:36.010291 | orchestrator | 2025-06-19 09:50:36.010305 | orchestrator | TASK [Get current user] ******************************************************** 2025-06-19 09:50:36.794637 | orchestrator | ok: [testbed-manager] 2025-06-19 09:50:36.794726 | orchestrator | 2025-06-19 09:50:36.794745 | orchestrator | TASK [Create venv directory] *************************************************** 2025-06-19 09:50:37.507815 | orchestrator | changed: [testbed-manager] 2025-06-19 09:50:37.507901 | orchestrator | 2025-06-19 09:50:37.507928 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-06-19 09:50:43.758209 | orchestrator | changed: [testbed-manager] 2025-06-19 09:50:43.758277 | orchestrator | 2025-06-19 09:50:43.758313 | orchestrator | TASK [Install ansible-core in 
venv] ******************************************** 2025-06-19 09:50:49.760064 | orchestrator | changed: [testbed-manager] 2025-06-19 09:50:49.760158 | orchestrator | 2025-06-19 09:50:49.760177 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-06-19 09:50:52.345027 | orchestrator | changed: [testbed-manager] 2025-06-19 09:50:52.345077 | orchestrator | 2025-06-19 09:50:52.345088 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-06-19 09:50:54.096555 | orchestrator | changed: [testbed-manager] 2025-06-19 09:50:54.096637 | orchestrator | 2025-06-19 09:50:54.096654 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-06-19 09:50:55.223854 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-06-19 09:50:55.224061 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-06-19 09:50:55.224081 | orchestrator | 2025-06-19 09:50:55.224094 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-06-19 09:50:55.268345 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-06-19 09:50:55.268409 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-06-19 09:50:55.268423 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-06-19 09:50:55.268435 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-06-19 09:51:02.785040 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-06-19 09:51:02.785131 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-06-19 09:51:02.785151 | orchestrator | 2025-06-19 09:51:02.785162 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-06-19 09:51:03.395707 | orchestrator | changed: [testbed-manager] 2025-06-19 09:51:03.395784 | orchestrator | 2025-06-19 09:51:03.395800 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-06-19 09:52:38.371853 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-06-19 09:52:38.371898 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-06-19 09:52:38.371907 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-06-19 09:52:38.371914 | orchestrator | 2025-06-19 09:52:38.371920 | orchestrator | TASK [Install local collections] *********************************************** 2025-06-19 09:52:40.681297 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-06-19 09:52:40.681386 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-06-19 09:52:40.681401 | orchestrator | 2025-06-19 09:52:40.681413 | orchestrator | PLAY [Create operator user] **************************************************** 2025-06-19 09:52:40.681425 | orchestrator | 2025-06-19 09:52:40.681436 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-19 09:52:42.044682 | orchestrator | ok: [testbed-manager] 2025-06-19 09:52:42.044789 | orchestrator | 2025-06-19 09:52:42.044808 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-06-19 09:52:42.094604 | orchestrator | ok: [testbed-manager] 2025-06-19 09:52:42.094777 | 
orchestrator | 2025-06-19 09:52:42.094812 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-06-19 09:52:42.169564 | orchestrator | ok: [testbed-manager] 2025-06-19 09:52:42.169615 | orchestrator | 2025-06-19 09:52:42.169622 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-06-19 09:52:42.946092 | orchestrator | changed: [testbed-manager] 2025-06-19 09:52:42.946169 | orchestrator | 2025-06-19 09:52:42.946184 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-06-19 09:52:43.674697 | orchestrator | changed: [testbed-manager] 2025-06-19 09:52:43.675513 | orchestrator | 2025-06-19 09:52:43.675535 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-06-19 09:52:45.038177 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-06-19 09:52:45.038243 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-06-19 09:52:45.038258 | orchestrator | 2025-06-19 09:52:45.038284 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-06-19 09:52:46.506621 | orchestrator | changed: [testbed-manager] 2025-06-19 09:52:46.506695 | orchestrator | 2025-06-19 09:52:46.506729 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-06-19 09:52:48.236392 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-06-19 09:52:48.236534 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-06-19 09:52:48.236554 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-06-19 09:52:48.236567 | orchestrator | 2025-06-19 09:52:48.236579 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-06-19 09:52:48.817323 | orchestrator | changed: [testbed-manager] 
2025-06-19 09:52:48.817969 | orchestrator | 2025-06-19 09:52:48.817991 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-06-19 09:52:48.886069 | orchestrator | skipping: [testbed-manager] 2025-06-19 09:52:48.886116 | orchestrator | 2025-06-19 09:52:48.886122 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-06-19 09:52:49.781287 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-19 09:52:49.781385 | orchestrator | changed: [testbed-manager] 2025-06-19 09:52:49.781403 | orchestrator | 2025-06-19 09:52:49.781416 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-06-19 09:52:49.819776 | orchestrator | skipping: [testbed-manager] 2025-06-19 09:52:49.819842 | orchestrator | 2025-06-19 09:52:49.819852 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-06-19 09:52:49.854607 | orchestrator | skipping: [testbed-manager] 2025-06-19 09:52:49.854665 | orchestrator | 2025-06-19 09:52:49.854674 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-06-19 09:52:49.887028 | orchestrator | skipping: [testbed-manager] 2025-06-19 09:52:49.887088 | orchestrator | 2025-06-19 09:52:49.887098 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-06-19 09:52:49.936916 | orchestrator | skipping: [testbed-manager] 2025-06-19 09:52:49.936966 | orchestrator | 2025-06-19 09:52:49.936972 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-06-19 09:52:50.651935 | orchestrator | ok: [testbed-manager] 2025-06-19 09:52:50.651996 | orchestrator | 2025-06-19 09:52:50.652011 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-06-19 09:52:50.652024 | orchestrator | 2025-06-19 
09:52:50.652037 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-19 09:52:51.998062 | orchestrator | ok: [testbed-manager] 2025-06-19 09:52:51.998122 | orchestrator | 2025-06-19 09:52:51.998138 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-06-19 09:52:52.950996 | orchestrator | changed: [testbed-manager] 2025-06-19 09:52:52.951029 | orchestrator | 2025-06-19 09:52:52.951035 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-19 09:52:52.951041 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-06-19 09:52:52.951045 | orchestrator | 2025-06-19 09:52:53.162890 | orchestrator | ok: Runtime: 0:07:17.912893 2025-06-19 09:52:53.182680 | 2025-06-19 09:52:53.182818 | TASK [Point out that the log in on the manager is now possible] 2025-06-19 09:52:53.233699 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2025-06-19 09:52:53.244874 | 2025-06-19 09:52:53.245036 | TASK [Point out that the following task takes some time and does not give any output] 2025-06-19 09:52:53.283497 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
2025-06-19 09:52:53.292935 | 2025-06-19 09:52:53.293083 | TASK [Run manager part 1 + 2] 2025-06-19 09:52:54.247466 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-19 09:52:54.338117 | orchestrator | 2025-06-19 09:52:54.338189 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-06-19 09:52:54.338203 | orchestrator | 2025-06-19 09:52:54.338224 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-19 09:52:57.327725 | orchestrator | ok: [testbed-manager] 2025-06-19 09:52:57.327849 | orchestrator | 2025-06-19 09:52:57.327903 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-06-19 09:52:57.364894 | orchestrator | skipping: [testbed-manager] 2025-06-19 09:52:57.364976 | orchestrator | 2025-06-19 09:52:57.364998 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-06-19 09:52:57.399818 | orchestrator | ok: [testbed-manager] 2025-06-19 09:52:57.399876 | orchestrator | 2025-06-19 09:52:57.399889 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-06-19 09:52:57.426759 | orchestrator | ok: [testbed-manager] 2025-06-19 09:52:57.426808 | orchestrator | 2025-06-19 09:52:57.426816 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-06-19 09:52:57.491894 | orchestrator | ok: [testbed-manager] 2025-06-19 09:52:57.491948 | orchestrator | 2025-06-19 09:52:57.491958 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-06-19 09:52:57.565406 | orchestrator | ok: [testbed-manager] 2025-06-19 09:52:57.565462 | orchestrator | 2025-06-19 09:52:57.565469 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-06-19 09:52:57.617338 | 
orchestrator | included: /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-06-19 09:52:57.617384 | orchestrator | 2025-06-19 09:52:57.617390 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-06-19 09:52:58.324662 | orchestrator | ok: [testbed-manager] 2025-06-19 09:52:58.324704 | orchestrator | 2025-06-19 09:52:58.324711 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-06-19 09:52:58.372916 | orchestrator | skipping: [testbed-manager] 2025-06-19 09:52:58.372975 | orchestrator | 2025-06-19 09:52:58.372984 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-06-19 09:52:59.748621 | orchestrator | changed: [testbed-manager] 2025-06-19 09:52:59.748817 | orchestrator | 2025-06-19 09:52:59.748841 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-06-19 09:53:00.332128 | orchestrator | ok: [testbed-manager] 2025-06-19 09:53:00.332231 | orchestrator | 2025-06-19 09:53:00.332249 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-06-19 09:53:01.468631 | orchestrator | changed: [testbed-manager] 2025-06-19 09:53:01.468680 | orchestrator | 2025-06-19 09:53:01.468689 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-06-19 09:53:14.023835 | orchestrator | changed: [testbed-manager] 2025-06-19 09:53:14.023893 | orchestrator | 2025-06-19 09:53:14.023905 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-06-19 09:53:14.721336 | orchestrator | ok: [testbed-manager] 2025-06-19 09:53:14.721413 | orchestrator | 2025-06-19 09:53:14.721429 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2025-06-19 09:53:14.775898 | orchestrator | skipping: [testbed-manager] 2025-06-19 09:53:14.775954 | orchestrator | 2025-06-19 09:53:14.775962 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-06-19 09:53:15.736232 | orchestrator | changed: [testbed-manager] 2025-06-19 09:53:15.736280 | orchestrator | 2025-06-19 09:53:15.736291 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-06-19 09:53:16.732753 | orchestrator | changed: [testbed-manager] 2025-06-19 09:53:16.732875 | orchestrator | 2025-06-19 09:53:16.732886 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-06-19 09:53:17.321188 | orchestrator | changed: [testbed-manager] 2025-06-19 09:53:17.321287 | orchestrator | 2025-06-19 09:53:17.321303 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-06-19 09:53:17.360935 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-06-19 09:53:17.361023 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-06-19 09:53:17.361038 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-06-19 09:53:17.361050 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-06-19 09:53:20.322335 | orchestrator | changed: [testbed-manager] 2025-06-19 09:53:20.322420 | orchestrator | 2025-06-19 09:53:20.322436 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-06-19 09:53:28.735240 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-06-19 09:53:28.735301 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-06-19 09:53:28.735315 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-06-19 09:53:28.735326 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-06-19 09:53:28.735341 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-06-19 09:53:28.735351 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-06-19 09:53:28.735361 | orchestrator | 2025-06-19 09:53:28.735372 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-06-19 09:53:29.753674 | orchestrator | changed: [testbed-manager] 2025-06-19 09:53:29.753761 | orchestrator | 2025-06-19 09:53:29.753784 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-06-19 09:53:29.795569 | orchestrator | skipping: [testbed-manager] 2025-06-19 09:53:29.795617 | orchestrator | 2025-06-19 09:53:29.795625 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-06-19 09:53:32.796202 | orchestrator | changed: [testbed-manager] 2025-06-19 09:53:32.796294 | orchestrator | 2025-06-19 09:53:32.796311 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-06-19 09:53:32.841380 | orchestrator | skipping: [testbed-manager] 2025-06-19 09:53:32.841461 | orchestrator | 2025-06-19 09:53:32.841477 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-06-19 09:55:06.151271 | orchestrator | changed: [testbed-manager] 2025-06-19 
09:55:06.151459 | orchestrator | 2025-06-19 09:55:06.151482 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-06-19 09:55:07.276441 | orchestrator | ok: [testbed-manager] 2025-06-19 09:55:07.276512 | orchestrator | 2025-06-19 09:55:07.276529 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-19 09:55:07.276543 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-06-19 09:55:07.276555 | orchestrator | 2025-06-19 09:55:07.404421 | orchestrator | ok: Runtime: 0:02:13.719062 2025-06-19 09:55:07.412525 | 2025-06-19 09:55:07.412618 | TASK [Reboot manager] 2025-06-19 09:55:08.945374 | orchestrator | ok: Runtime: 0:00:00.997950 2025-06-19 09:55:08.952649 | 2025-06-19 09:55:08.952735 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-06-19 09:55:23.281950 | orchestrator | ok 2025-06-19 09:55:23.292308 | 2025-06-19 09:55:23.292440 | TASK [Wait a little longer for the manager so that everything is ready] 2025-06-19 09:56:23.327647 | orchestrator | ok 2025-06-19 09:56:23.339192 | 2025-06-19 09:56:23.339361 | TASK [Deploy manager + bootstrap nodes] 2025-06-19 09:56:25.805037 | orchestrator | 2025-06-19 09:56:25.805354 | orchestrator | # DEPLOY MANAGER 2025-06-19 09:56:25.805385 | orchestrator | 2025-06-19 09:56:25.805400 | orchestrator | + set -e 2025-06-19 09:56:25.805414 | orchestrator | + echo 2025-06-19 09:56:25.805429 | orchestrator | + echo '# DEPLOY MANAGER' 2025-06-19 09:56:25.805447 | orchestrator | + echo 2025-06-19 09:56:25.805499 | orchestrator | + cat /opt/manager-vars.sh 2025-06-19 09:56:25.808655 | orchestrator | export NUMBER_OF_NODES=6 2025-06-19 09:56:25.808751 | orchestrator | 2025-06-19 09:56:25.808767 | orchestrator | export CEPH_VERSION=reef 2025-06-19 09:56:25.808782 | orchestrator | export CONFIGURATION_VERSION=main 2025-06-19 09:56:25.808794 | orchestrator 
| export MANAGER_VERSION=latest 2025-06-19 09:56:25.808824 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-06-19 09:56:25.808835 | orchestrator | 2025-06-19 09:56:25.808853 | orchestrator | export ARA=false 2025-06-19 09:56:25.808865 | orchestrator | export DEPLOY_MODE=manager 2025-06-19 09:56:25.808883 | orchestrator | export TEMPEST=false 2025-06-19 09:56:25.808895 | orchestrator | export IS_ZUUL=true 2025-06-19 09:56:25.808906 | orchestrator | 2025-06-19 09:56:25.808925 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.19 2025-06-19 09:56:25.808937 | orchestrator | export EXTERNAL_API=false 2025-06-19 09:56:25.808948 | orchestrator | 2025-06-19 09:56:25.808959 | orchestrator | export IMAGE_USER=ubuntu 2025-06-19 09:56:25.808973 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-06-19 09:56:25.808984 | orchestrator | 2025-06-19 09:56:25.808995 | orchestrator | export CEPH_STACK=ceph-ansible 2025-06-19 09:56:25.809017 | orchestrator | 2025-06-19 09:56:25.809028 | orchestrator | + echo 2025-06-19 09:56:25.809045 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-19 09:56:25.809739 | orchestrator | ++ export INTERACTIVE=false 2025-06-19 09:56:25.809759 | orchestrator | ++ INTERACTIVE=false 2025-06-19 09:56:25.809772 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-19 09:56:25.809785 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-19 09:56:25.809814 | orchestrator | + source /opt/manager-vars.sh 2025-06-19 09:56:25.809827 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-19 09:56:25.809839 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-19 09:56:25.809851 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-19 09:56:25.809863 | orchestrator | ++ CEPH_VERSION=reef 2025-06-19 09:56:25.809880 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-19 09:56:25.809893 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-19 09:56:25.809906 | orchestrator | ++ export MANAGER_VERSION=latest 2025-06-19 09:56:25.809918 | 
orchestrator | ++ MANAGER_VERSION=latest 2025-06-19 09:56:25.809934 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-19 09:56:25.809957 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-19 09:56:25.809969 | orchestrator | ++ export ARA=false 2025-06-19 09:56:25.809982 | orchestrator | ++ ARA=false 2025-06-19 09:56:25.809995 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-19 09:56:25.810007 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-19 09:56:25.810049 | orchestrator | ++ export TEMPEST=false 2025-06-19 09:56:25.810062 | orchestrator | ++ TEMPEST=false 2025-06-19 09:56:25.810079 | orchestrator | ++ export IS_ZUUL=true 2025-06-19 09:56:25.810090 | orchestrator | ++ IS_ZUUL=true 2025-06-19 09:56:25.810101 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.19 2025-06-19 09:56:25.810145 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.19 2025-06-19 09:56:25.810164 | orchestrator | ++ export EXTERNAL_API=false 2025-06-19 09:56:25.810180 | orchestrator | ++ EXTERNAL_API=false 2025-06-19 09:56:25.810199 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-19 09:56:25.810218 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-19 09:56:25.810237 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-19 09:56:25.810254 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-19 09:56:25.810266 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-19 09:56:25.810277 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-19 09:56:25.810293 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-06-19 09:56:25.863067 | orchestrator | + docker version 2025-06-19 09:56:26.120285 | orchestrator | Client: Docker Engine - Community 2025-06-19 09:56:26.120392 | orchestrator | Version: 27.5.1 2025-06-19 09:56:26.120411 | orchestrator | API version: 1.47 2025-06-19 09:56:26.120423 | orchestrator | Go version: go1.22.11 2025-06-19 09:56:26.120435 | orchestrator | Git commit: 9f9e405 2025-06-19 09:56:26.120446 
| orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-06-19 09:56:26.120459 | orchestrator | OS/Arch: linux/amd64 2025-06-19 09:56:26.120471 | orchestrator | Context: default 2025-06-19 09:56:26.120482 | orchestrator | 2025-06-19 09:56:26.120494 | orchestrator | Server: Docker Engine - Community 2025-06-19 09:56:26.120506 | orchestrator | Engine: 2025-06-19 09:56:26.120518 | orchestrator | Version: 27.5.1 2025-06-19 09:56:26.120529 | orchestrator | API version: 1.47 (minimum version 1.24) 2025-06-19 09:56:26.120572 | orchestrator | Go version: go1.22.11 2025-06-19 09:56:26.120586 | orchestrator | Git commit: 4c9b3b0 2025-06-19 09:56:26.120598 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-06-19 09:56:26.120609 | orchestrator | OS/Arch: linux/amd64 2025-06-19 09:56:26.120620 | orchestrator | Experimental: false 2025-06-19 09:56:26.120631 | orchestrator | containerd: 2025-06-19 09:56:26.120643 | orchestrator | Version: 1.7.27 2025-06-19 09:56:26.120654 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-06-19 09:56:26.120665 | orchestrator | runc: 2025-06-19 09:56:26.120677 | orchestrator | Version: 1.2.5 2025-06-19 09:56:26.120688 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-06-19 09:56:26.120700 | orchestrator | docker-init: 2025-06-19 09:56:26.120712 | orchestrator | Version: 0.19.0 2025-06-19 09:56:26.120725 | orchestrator | GitCommit: de40ad0 2025-06-19 09:56:26.123333 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-06-19 09:56:26.132746 | orchestrator | + set -e 2025-06-19 09:56:26.132793 | orchestrator | + source /opt/manager-vars.sh 2025-06-19 09:56:26.132804 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-19 09:56:26.132811 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-19 09:56:26.132818 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-19 09:56:26.132825 | orchestrator | ++ CEPH_VERSION=reef 2025-06-19 09:56:26.132832 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-19 
09:56:26.132840 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-19 09:56:26.132847 | orchestrator | ++ export MANAGER_VERSION=latest 2025-06-19 09:56:26.132854 | orchestrator | ++ MANAGER_VERSION=latest 2025-06-19 09:56:26.132861 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-19 09:56:26.132867 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-19 09:56:26.132875 | orchestrator | ++ export ARA=false 2025-06-19 09:56:26.132881 | orchestrator | ++ ARA=false 2025-06-19 09:56:26.132888 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-19 09:56:26.132895 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-19 09:56:26.132901 | orchestrator | ++ export TEMPEST=false 2025-06-19 09:56:26.132909 | orchestrator | ++ TEMPEST=false 2025-06-19 09:56:26.132916 | orchestrator | ++ export IS_ZUUL=true 2025-06-19 09:56:26.132922 | orchestrator | ++ IS_ZUUL=true 2025-06-19 09:56:26.132929 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.19 2025-06-19 09:56:26.132944 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.19 2025-06-19 09:56:26.132951 | orchestrator | ++ export EXTERNAL_API=false 2025-06-19 09:56:26.132957 | orchestrator | ++ EXTERNAL_API=false 2025-06-19 09:56:26.132965 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-19 09:56:26.132971 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-19 09:56:26.132979 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-19 09:56:26.132985 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-19 09:56:26.132992 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-19 09:56:26.132998 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-19 09:56:26.133005 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-19 09:56:26.133245 | orchestrator | ++ export INTERACTIVE=false 2025-06-19 09:56:26.133325 | orchestrator | ++ INTERACTIVE=false 2025-06-19 09:56:26.133339 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-19 09:56:26.133355 | orchestrator | ++ 
OSISM_APPLY_RETRY=1 2025-06-19 09:56:26.133377 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-06-19 09:56:26.133388 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-06-19 09:56:26.133400 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2025-06-19 09:56:26.139444 | orchestrator | + set -e 2025-06-19 09:56:26.139482 | orchestrator | + VERSION=reef 2025-06-19 09:56:26.140478 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2025-06-19 09:56:26.145981 | orchestrator | + [[ -n ceph_version: reef ]] 2025-06-19 09:56:26.146012 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2025-06-19 09:56:26.151710 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2025-06-19 09:56:26.157878 | orchestrator | + set -e 2025-06-19 09:56:26.157918 | orchestrator | + VERSION=2024.2 2025-06-19 09:56:26.159008 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2025-06-19 09:56:26.162384 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2025-06-19 09:56:26.162425 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2025-06-19 09:56:26.167853 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-06-19 09:56:26.168591 | orchestrator | ++ semver latest 7.0.0 2025-06-19 09:56:26.229956 | orchestrator | + [[ -1 -ge 0 ]] 2025-06-19 09:56:26.230088 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-06-19 09:56:26.230105 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-06-19 09:56:26.230188 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-06-19 09:56:26.319969 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-06-19 09:56:26.322683 | orchestrator | + source /opt/venv/bin/activate 2025-06-19 09:56:26.323992 | orchestrator | ++ 
deactivate nondestructive 2025-06-19 09:56:26.324017 | orchestrator | ++ '[' -n '' ']' 2025-06-19 09:56:26.324030 | orchestrator | ++ '[' -n '' ']' 2025-06-19 09:56:26.324041 | orchestrator | ++ hash -r 2025-06-19 09:56:26.324053 | orchestrator | ++ '[' -n '' ']' 2025-06-19 09:56:26.324064 | orchestrator | ++ unset VIRTUAL_ENV 2025-06-19 09:56:26.324080 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-06-19 09:56:26.324091 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2025-06-19 09:56:26.324353 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-06-19 09:56:26.324376 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-06-19 09:56:26.324388 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-06-19 09:56:26.324400 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-06-19 09:56:26.324523 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-06-19 09:56:26.324540 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-06-19 09:56:26.324552 | orchestrator | ++ export PATH 2025-06-19 09:56:26.324664 | orchestrator | ++ '[' -n '' ']' 2025-06-19 09:56:26.324683 | orchestrator | ++ '[' -z '' ']' 2025-06-19 09:56:26.324748 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-06-19 09:56:26.324762 | orchestrator | ++ PS1='(venv) ' 2025-06-19 09:56:26.324773 | orchestrator | ++ export PS1 2025-06-19 09:56:26.324784 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-06-19 09:56:26.324795 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-06-19 09:56:26.324841 | orchestrator | ++ hash -r 2025-06-19 09:56:26.325034 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-06-19 09:56:27.667994 | orchestrator | 2025-06-19 09:56:27.668105 | orchestrator | PLAY [Copy custom facts] 
******************************************************* 2025-06-19 09:56:27.668175 | orchestrator | 2025-06-19 09:56:27.668188 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-06-19 09:56:28.216533 | orchestrator | ok: [testbed-manager] 2025-06-19 09:56:28.216652 | orchestrator | 2025-06-19 09:56:28.216670 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-06-19 09:56:29.186504 | orchestrator | changed: [testbed-manager] 2025-06-19 09:56:29.186644 | orchestrator | 2025-06-19 09:56:29.186662 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-06-19 09:56:29.186675 | orchestrator | 2025-06-19 09:56:29.186686 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-19 09:56:31.544875 | orchestrator | ok: [testbed-manager] 2025-06-19 09:56:31.544998 | orchestrator | 2025-06-19 09:56:31.545016 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-06-19 09:56:31.595634 | orchestrator | ok: [testbed-manager] 2025-06-19 09:56:31.595713 | orchestrator | 2025-06-19 09:56:31.595731 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-06-19 09:56:32.051559 | orchestrator | changed: [testbed-manager] 2025-06-19 09:56:32.051665 | orchestrator | 2025-06-19 09:56:32.051681 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2025-06-19 09:56:32.095395 | orchestrator | skipping: [testbed-manager] 2025-06-19 09:56:32.095454 | orchestrator | 2025-06-19 09:56:32.095468 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-06-19 09:56:32.447629 | orchestrator | changed: [testbed-manager] 2025-06-19 09:56:32.447728 | orchestrator | 2025-06-19 09:56:32.447744 | orchestrator | TASK [Use insecure 
glance configuration] *************************************** 2025-06-19 09:56:32.511101 | orchestrator | skipping: [testbed-manager] 2025-06-19 09:56:32.511221 | orchestrator | 2025-06-19 09:56:32.511237 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-06-19 09:56:32.846648 | orchestrator | ok: [testbed-manager] 2025-06-19 09:56:32.846756 | orchestrator | 2025-06-19 09:56:32.846775 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-06-19 09:56:32.965004 | orchestrator | skipping: [testbed-manager] 2025-06-19 09:56:32.965099 | orchestrator | 2025-06-19 09:56:32.965113 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2025-06-19 09:56:32.965164 | orchestrator | 2025-06-19 09:56:32.965178 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-19 09:56:34.780559 | orchestrator | ok: [testbed-manager] 2025-06-19 09:56:34.780651 | orchestrator | 2025-06-19 09:56:34.780664 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-06-19 09:56:34.867268 | orchestrator | included: osism.services.traefik for testbed-manager 2025-06-19 09:56:34.867379 | orchestrator | 2025-06-19 09:56:34.867400 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-06-19 09:56:34.922886 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-06-19 09:56:34.922968 | orchestrator | 2025-06-19 09:56:34.922982 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-06-19 09:56:35.988727 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-06-19 09:56:35.988834 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 
2025-06-19 09:56:35.988850 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-06-19 09:56:35.988862 | orchestrator | 2025-06-19 09:56:35.988875 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-06-19 09:56:37.789363 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-06-19 09:56:37.789485 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-06-19 09:56:37.789506 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-06-19 09:56:37.789518 | orchestrator | 2025-06-19 09:56:37.789531 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-06-19 09:56:38.409470 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-19 09:56:38.409592 | orchestrator | changed: [testbed-manager] 2025-06-19 09:56:38.409609 | orchestrator | 2025-06-19 09:56:38.409623 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-06-19 09:56:39.041303 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-19 09:56:39.041429 | orchestrator | changed: [testbed-manager] 2025-06-19 09:56:39.041452 | orchestrator | 2025-06-19 09:56:39.041471 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-06-19 09:56:39.097507 | orchestrator | skipping: [testbed-manager] 2025-06-19 09:56:39.097606 | orchestrator | 2025-06-19 09:56:39.097620 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-06-19 09:56:39.466871 | orchestrator | ok: [testbed-manager] 2025-06-19 09:56:39.466972 | orchestrator | 2025-06-19 09:56:39.466987 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-06-19 09:56:39.546470 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-06-19 09:56:39.546563 | orchestrator | 2025-06-19 09:56:39.546577 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-06-19 09:56:40.638241 | orchestrator | changed: [testbed-manager] 2025-06-19 09:56:40.638318 | orchestrator | 2025-06-19 09:56:40.638335 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-06-19 09:56:41.431061 | orchestrator | changed: [testbed-manager] 2025-06-19 09:56:41.431211 | orchestrator | 2025-06-19 09:56:41.431229 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-06-19 09:56:53.739371 | orchestrator | changed: [testbed-manager] 2025-06-19 09:56:53.739492 | orchestrator | 2025-06-19 09:56:53.739510 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-06-19 09:56:53.782917 | orchestrator | skipping: [testbed-manager] 2025-06-19 09:56:53.782989 | orchestrator | 2025-06-19 09:56:53.782999 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-06-19 09:56:53.783006 | orchestrator | 2025-06-19 09:56:53.783013 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-19 09:56:55.498367 | orchestrator | ok: [testbed-manager] 2025-06-19 09:56:55.498472 | orchestrator | 2025-06-19 09:56:55.498513 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-06-19 09:56:55.593101 | orchestrator | included: osism.services.manager for testbed-manager 2025-06-19 09:56:55.593245 | orchestrator | 2025-06-19 09:56:55.593272 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-06-19 09:56:55.648854 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-06-19 09:56:55.648949 | orchestrator | 2025-06-19 09:56:55.648964 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-06-19 09:56:58.149274 | orchestrator | ok: [testbed-manager] 2025-06-19 09:56:58.149383 | orchestrator | 2025-06-19 09:56:58.149399 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-06-19 09:56:58.205230 | orchestrator | ok: [testbed-manager] 2025-06-19 09:56:58.205328 | orchestrator | 2025-06-19 09:56:58.205346 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-06-19 09:56:58.335534 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-06-19 09:56:58.335617 | orchestrator | 2025-06-19 09:56:58.335630 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-06-19 09:57:01.134719 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-06-19 09:57:01.134829 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-06-19 09:57:01.134845 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-06-19 09:57:01.134857 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-06-19 09:57:01.134868 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-06-19 09:57:01.134880 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-06-19 09:57:01.134891 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-06-19 09:57:01.134903 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-06-19 09:57:01.134914 | orchestrator | 2025-06-19 09:57:01.134927 | orchestrator | TASK 
[osism.services.manager : Copy all environment file] ********************** 2025-06-19 09:57:01.776568 | orchestrator | changed: [testbed-manager] 2025-06-19 09:57:01.776674 | orchestrator | 2025-06-19 09:57:01.776691 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-06-19 09:57:02.448491 | orchestrator | changed: [testbed-manager] 2025-06-19 09:57:02.448603 | orchestrator | 2025-06-19 09:57:02.448621 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-06-19 09:57:02.525100 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-06-19 09:57:02.525244 | orchestrator | 2025-06-19 09:57:02.525259 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-06-19 09:57:03.747909 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-06-19 09:57:03.748015 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-06-19 09:57:03.748031 | orchestrator | 2025-06-19 09:57:03.748044 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-06-19 09:57:04.370533 | orchestrator | changed: [testbed-manager] 2025-06-19 09:57:04.370635 | orchestrator | 2025-06-19 09:57:04.370650 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-06-19 09:57:04.438897 | orchestrator | skipping: [testbed-manager] 2025-06-19 09:57:04.438984 | orchestrator | 2025-06-19 09:57:04.438999 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-06-19 09:57:04.527516 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-06-19 09:57:04.527668 | orchestrator | 2025-06-19 09:57:04.527697 | orchestrator | TASK 
[osism.services.manager : Copy private ssh keys] ************************** 2025-06-19 09:57:05.959419 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-19 09:57:05.959524 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-19 09:57:05.959539 | orchestrator | changed: [testbed-manager] 2025-06-19 09:57:05.959553 | orchestrator | 2025-06-19 09:57:05.959566 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-06-19 09:57:06.591026 | orchestrator | changed: [testbed-manager] 2025-06-19 09:57:06.591190 | orchestrator | 2025-06-19 09:57:06.591212 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-06-19 09:57:06.651583 | orchestrator | skipping: [testbed-manager] 2025-06-19 09:57:06.651674 | orchestrator | 2025-06-19 09:57:06.651690 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-06-19 09:57:06.743895 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-06-19 09:57:06.743986 | orchestrator | 2025-06-19 09:57:06.744001 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-06-19 09:57:07.252932 | orchestrator | changed: [testbed-manager] 2025-06-19 09:57:07.253039 | orchestrator | 2025-06-19 09:57:07.253055 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-06-19 09:57:07.667607 | orchestrator | changed: [testbed-manager] 2025-06-19 09:57:07.667705 | orchestrator | 2025-06-19 09:57:07.667722 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-06-19 09:57:08.880349 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-06-19 09:57:08.880454 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-06-19 
09:57:08.880469 | orchestrator | 2025-06-19 09:57:08.880482 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-06-19 09:57:09.492308 | orchestrator | changed: [testbed-manager] 2025-06-19 09:57:09.492412 | orchestrator | 2025-06-19 09:57:09.492429 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-06-19 09:57:09.891875 | orchestrator | ok: [testbed-manager] 2025-06-19 09:57:09.891978 | orchestrator | 2025-06-19 09:57:09.891994 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-06-19 09:57:10.267757 | orchestrator | changed: [testbed-manager] 2025-06-19 09:57:10.267854 | orchestrator | 2025-06-19 09:57:10.267870 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-06-19 09:57:10.314754 | orchestrator | skipping: [testbed-manager] 2025-06-19 09:57:10.314804 | orchestrator | 2025-06-19 09:57:10.314817 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-06-19 09:57:10.384002 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-06-19 09:57:10.384103 | orchestrator | 2025-06-19 09:57:10.384117 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-06-19 09:57:10.432288 | orchestrator | ok: [testbed-manager] 2025-06-19 09:57:10.432356 | orchestrator | 2025-06-19 09:57:10.432371 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-06-19 09:57:12.471583 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-06-19 09:57:12.471707 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-06-19 09:57:12.471724 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-06-19 
09:57:12.471736 | orchestrator | 2025-06-19 09:57:12.471749 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-06-19 09:57:13.178622 | orchestrator | changed: [testbed-manager] 2025-06-19 09:57:13.178724 | orchestrator | 2025-06-19 09:57:13.178741 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2025-06-19 09:57:13.879400 | orchestrator | changed: [testbed-manager] 2025-06-19 09:57:13.879508 | orchestrator | 2025-06-19 09:57:13.879526 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-06-19 09:57:14.592700 | orchestrator | changed: [testbed-manager] 2025-06-19 09:57:14.592809 | orchestrator | 2025-06-19 09:57:14.592827 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-06-19 09:57:14.676421 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-06-19 09:57:14.676527 | orchestrator | 2025-06-19 09:57:14.676544 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-06-19 09:57:14.716791 | orchestrator | ok: [testbed-manager] 2025-06-19 09:57:14.716909 | orchestrator | 2025-06-19 09:57:14.716936 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-06-19 09:57:15.449896 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-06-19 09:57:15.449994 | orchestrator | 2025-06-19 09:57:15.450011 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-06-19 09:57:15.536135 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-06-19 09:57:15.536254 | orchestrator | 2025-06-19 09:57:15.536268 | orchestrator | TASK 
[osism.services.manager : Copy manager systemd unit file] ***************** 2025-06-19 09:57:16.233970 | orchestrator | changed: [testbed-manager] 2025-06-19 09:57:16.234125 | orchestrator | 2025-06-19 09:57:16.234144 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-06-19 09:57:16.844933 | orchestrator | ok: [testbed-manager] 2025-06-19 09:57:16.845034 | orchestrator | 2025-06-19 09:57:16.845050 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-06-19 09:57:16.901384 | orchestrator | skipping: [testbed-manager] 2025-06-19 09:57:16.901518 | orchestrator | 2025-06-19 09:57:16.901535 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-06-19 09:57:16.954266 | orchestrator | ok: [testbed-manager] 2025-06-19 09:57:16.954338 | orchestrator | 2025-06-19 09:57:16.954351 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-06-19 09:57:17.778211 | orchestrator | changed: [testbed-manager] 2025-06-19 09:57:17.778323 | orchestrator | 2025-06-19 09:57:17.778339 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-06-19 09:58:25.171309 | orchestrator | changed: [testbed-manager] 2025-06-19 09:58:25.171453 | orchestrator | 2025-06-19 09:58:25.171483 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-06-19 09:58:26.203022 | orchestrator | ok: [testbed-manager] 2025-06-19 09:58:26.203125 | orchestrator | 2025-06-19 09:58:26.203141 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2025-06-19 09:58:26.250091 | orchestrator | skipping: [testbed-manager] 2025-06-19 09:58:26.250168 | orchestrator | 2025-06-19 09:58:26.250182 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 
2025-06-19 09:58:28.899215 | orchestrator | changed: [testbed-manager] 2025-06-19 09:58:28.899363 | orchestrator | 2025-06-19 09:58:28.899382 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-06-19 09:58:28.956052 | orchestrator | ok: [testbed-manager] 2025-06-19 09:58:28.956138 | orchestrator | 2025-06-19 09:58:28.956152 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-06-19 09:58:28.956164 | orchestrator | 2025-06-19 09:58:28.956175 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-06-19 09:58:29.000381 | orchestrator | skipping: [testbed-manager] 2025-06-19 09:58:29.000465 | orchestrator | 2025-06-19 09:58:29.000478 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-06-19 09:59:29.044830 | orchestrator | Pausing for 60 seconds 2025-06-19 09:59:29.044949 | orchestrator | changed: [testbed-manager] 2025-06-19 09:59:29.044967 | orchestrator | 2025-06-19 09:59:29.044980 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-06-19 09:59:34.049431 | orchestrator | changed: [testbed-manager] 2025-06-19 09:59:34.049539 | orchestrator | 2025-06-19 09:59:34.049552 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-06-19 10:00:15.677391 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-06-19 10:00:15.677496 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
2025-06-19 10:00:15.677512 | orchestrator | changed: [testbed-manager]
2025-06-19 10:00:15.677526 | orchestrator | 
2025-06-19 10:00:15.677537 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2025-06-19 10:00:24.049274 | orchestrator | changed: [testbed-manager]
2025-06-19 10:00:24.049441 | orchestrator | 
2025-06-19 10:00:24.049462 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2025-06-19 10:00:24.138454 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2025-06-19 10:00:24.138576 | orchestrator | 
2025-06-19 10:00:24.138591 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-06-19 10:00:24.138603 | orchestrator | 
2025-06-19 10:00:24.138614 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2025-06-19 10:00:24.192433 | orchestrator | skipping: [testbed-manager]
2025-06-19 10:00:24.192517 | orchestrator | 
2025-06-19 10:00:24.192531 | orchestrator | PLAY RECAP *********************************************************************
2025-06-19 10:00:24.192544 | orchestrator | testbed-manager : ok=64 changed=35 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2025-06-19 10:00:24.192556 | orchestrator | 
2025-06-19 10:00:24.300668 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-06-19 10:00:24.300757 | orchestrator | + deactivate
2025-06-19 10:00:24.300772 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-06-19 10:00:24.300786 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-06-19 10:00:24.300797 | orchestrator | + export PATH
2025-06-19 10:00:24.300808 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-06-19 10:00:24.300820 | orchestrator | + '[' -n '' ']'
2025-06-19 10:00:24.300831 | orchestrator | + hash -r
2025-06-19 10:00:24.300842 | orchestrator | + '[' -n '' ']'
2025-06-19 10:00:24.300853 | orchestrator | + unset VIRTUAL_ENV
2025-06-19 10:00:24.300864 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-06-19 10:00:24.300897 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-06-19 10:00:24.300909 | orchestrator | + unset -f deactivate
2025-06-19 10:00:24.300921 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2025-06-19 10:00:24.308192 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-06-19 10:00:24.308229 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-06-19 10:00:24.308242 | orchestrator | + local max_attempts=60
2025-06-19 10:00:24.308253 | orchestrator | + local name=ceph-ansible
2025-06-19 10:00:24.308264 | orchestrator | + local attempt_num=1
2025-06-19 10:00:24.308729 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-06-19 10:00:24.346846 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-06-19 10:00:24.346913 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-06-19 10:00:24.346934 | orchestrator | + local max_attempts=60
2025-06-19 10:00:24.346954 | orchestrator | + local name=kolla-ansible
2025-06-19 10:00:24.346972 | orchestrator | + local attempt_num=1
2025-06-19 10:00:24.346991 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-06-19 10:00:24.380522 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-06-19 10:00:24.380571 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-06-19 10:00:24.380584 | orchestrator | + local max_attempts=60
2025-06-19 10:00:24.380595 | orchestrator | + local name=osism-ansible
2025-06-19 10:00:24.380606 | orchestrator | + local attempt_num=1
2025-06-19 10:00:24.380886 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-06-19 10:00:24.411349 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-06-19 10:00:24.411437 | orchestrator | + [[ true == \t\r\u\e ]]
2025-06-19 10:00:24.411450 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-06-19 10:00:25.094429 | orchestrator | + docker compose --project-directory /opt/manager ps
2025-06-19 10:00:25.273611 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-06-19 10:00:25.273712 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy)
2025-06-19 10:00:25.273727 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy)
2025-06-19 10:00:25.273740 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp
2025-06-19 10:00:25.273753 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp
2025-06-19 10:00:25.273798 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy)
2025-06-19 10:00:25.273810 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy)
2025-06-19 10:00:25.273821 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 51 seconds (healthy)
2025-06-19 10:00:25.273832 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy)
2025-06-19 10:00:25.273843 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp
2025-06-19 10:00:25.273854 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy)
2025-06-19 10:00:25.273864 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp
2025-06-19 10:00:25.273875 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy)
2025-06-19 10:00:25.273886 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy)
2025-06-19 10:00:25.273897 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy)
2025-06-19 10:00:25.282214 | orchestrator | ++ semver latest 7.0.0
2025-06-19 10:00:25.327726 | orchestrator | + [[ -1 -ge 0 ]]
2025-06-19 10:00:25.327800 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-06-19 10:00:25.327815 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2025-06-19 10:00:25.331108 | orchestrator | + osism apply resolvconf -l testbed-manager
2025-06-19 10:00:27.025497 | orchestrator | Registering Redlock._acquired_script
2025-06-19 10:00:27.025599 | orchestrator | Registering Redlock._extend_script
2025-06-19 10:00:27.025615 | orchestrator | Registering Redlock._release_script
2025-06-19 10:00:27.222754 | orchestrator | 2025-06-19 10:00:27 | INFO  | Task c817b0e5-6274-4c91-961b-aebd153f0e29 (resolvconf) was prepared for execution.
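The `wait_for_container_healthy` trace above polls each container's Docker health status before the deployment continues; in this run every container reports healthy on the first probe. A minimal sketch of that polling pattern, assuming a five-second retry interval and error message (only the `docker inspect` probe and the 60-attempt budget appear in the trace; the real function lives in the testbed scripts):

```shell
# Hedged sketch of the health-wait loop traced in the log above.
# The sleep interval and failure handling are assumptions.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # Poll Docker's reported health status until the container is "healthy".
    until [ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" = "healthy" ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "$name did not become healthy after $max_attempts attempts" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
}
```

Because ceph-ansible, kolla-ansible, and osism-ansible each report healthy on the first probe in the run above, the loop body is never entered there.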
2025-06-19 10:00:27.222842 | orchestrator | 2025-06-19 10:00:27 | INFO  | It takes a moment until task c817b0e5-6274-4c91-961b-aebd153f0e29 (resolvconf) has been started and output is visible here.
2025-06-19 10:00:40.502074 | orchestrator | 
2025-06-19 10:00:40.502218 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2025-06-19 10:00:40.502236 | orchestrator | 
2025-06-19 10:00:40.502247 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-19 10:00:40.502259 | orchestrator | Thursday 19 June 2025 10:00:31 +0000 (0:00:00.152) 0:00:00.152 *********
2025-06-19 10:00:40.502270 | orchestrator | ok: [testbed-manager]
2025-06-19 10:00:40.502282 | orchestrator | 
2025-06-19 10:00:40.502293 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-06-19 10:00:40.502309 | orchestrator | Thursday 19 June 2025 10:00:34 +0000 (0:00:03.622) 0:00:03.774 *********
2025-06-19 10:00:40.502321 | orchestrator | skipping: [testbed-manager]
2025-06-19 10:00:40.502358 | orchestrator | 
2025-06-19 10:00:40.502370 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-06-19 10:00:40.502429 | orchestrator | Thursday 19 June 2025 10:00:34 +0000 (0:00:00.064) 0:00:03.838 *********
2025-06-19 10:00:40.502444 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2025-06-19 10:00:40.502456 | orchestrator | 
2025-06-19 10:00:40.502467 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-06-19 10:00:40.502477 | orchestrator | Thursday 19 June 2025 10:00:34 +0000 (0:00:00.089) 0:00:03.928 *********
2025-06-19 10:00:40.502488 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2025-06-19 10:00:40.502500 | orchestrator | 
2025-06-19 10:00:40.502510 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-06-19 10:00:40.502521 | orchestrator | Thursday 19 June 2025 10:00:34 +0000 (0:00:00.086) 0:00:04.014 *********
2025-06-19 10:00:40.502532 | orchestrator | ok: [testbed-manager]
2025-06-19 10:00:40.502542 | orchestrator | 
2025-06-19 10:00:40.502555 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-06-19 10:00:40.502567 | orchestrator | Thursday 19 June 2025 10:00:35 +0000 (0:00:01.004) 0:00:05.019 *********
2025-06-19 10:00:40.502580 | orchestrator | skipping: [testbed-manager]
2025-06-19 10:00:40.502592 | orchestrator | 
2025-06-19 10:00:40.502604 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-06-19 10:00:40.502616 | orchestrator | Thursday 19 June 2025 10:00:36 +0000 (0:00:00.058) 0:00:05.077 *********
2025-06-19 10:00:40.502629 | orchestrator | ok: [testbed-manager]
2025-06-19 10:00:40.502641 | orchestrator | 
2025-06-19 10:00:40.502653 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-06-19 10:00:40.502665 | orchestrator | Thursday 19 June 2025 10:00:36 +0000 (0:00:00.466) 0:00:05.544 *********
2025-06-19 10:00:40.502677 | orchestrator | skipping: [testbed-manager]
2025-06-19 10:00:40.502689 | orchestrator | 
2025-06-19 10:00:40.502702 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-06-19 10:00:40.502716 | orchestrator | Thursday 19 June 2025 10:00:36 +0000 (0:00:00.090) 0:00:05.635 *********
2025-06-19 10:00:40.502728 | orchestrator | changed: [testbed-manager]
2025-06-19 10:00:40.502740 | orchestrator | 
2025-06-19 10:00:40.502818 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-06-19 10:00:40.502831 | orchestrator | Thursday 19 June 2025 10:00:37 +0000 (0:00:00.510) 0:00:06.146 *********
2025-06-19 10:00:40.502843 | orchestrator | changed: [testbed-manager]
2025-06-19 10:00:40.502855 | orchestrator | 
2025-06-19 10:00:40.502867 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-06-19 10:00:40.502879 | orchestrator | Thursday 19 June 2025 10:00:38 +0000 (0:00:01.076) 0:00:07.222 *********
2025-06-19 10:00:40.502892 | orchestrator | ok: [testbed-manager]
2025-06-19 10:00:40.502905 | orchestrator | 
2025-06-19 10:00:40.502916 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-06-19 10:00:40.502937 | orchestrator | Thursday 19 June 2025 10:00:39 +0000 (0:00:01.004) 0:00:08.226 *********
2025-06-19 10:00:40.502949 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2025-06-19 10:00:40.502960 | orchestrator | 
2025-06-19 10:00:40.502971 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-06-19 10:00:40.502981 | orchestrator | Thursday 19 June 2025 10:00:39 +0000 (0:00:00.072) 0:00:08.299 *********
2025-06-19 10:00:40.502992 | orchestrator | changed: [testbed-manager]
2025-06-19 10:00:40.503003 | orchestrator | 
2025-06-19 10:00:40.503014 | orchestrator | PLAY RECAP *********************************************************************
2025-06-19 10:00:40.503025 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-06-19 10:00:40.503045 | orchestrator | 
2025-06-19 10:00:40.503056 | orchestrator | 
2025-06-19 10:00:40.503066 | orchestrator | TASKS RECAP ********************************************************************
2025-06-19 10:00:40.503077 | orchestrator | Thursday 19 June 2025 10:00:40 +0000 (0:00:01.058) 0:00:09.357 *********
2025-06-19 10:00:40.503088 | orchestrator | ===============================================================================
2025-06-19 10:00:40.503099 | orchestrator | Gathering Facts --------------------------------------------------------- 3.62s
2025-06-19 10:00:40.503109 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.08s
2025-06-19 10:00:40.503120 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.06s
2025-06-19 10:00:40.503131 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.00s
2025-06-19 10:00:40.503141 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.00s
2025-06-19 10:00:40.503152 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.51s
2025-06-19 10:00:40.503182 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.47s
2025-06-19 10:00:40.503193 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s
2025-06-19 10:00:40.503204 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s
2025-06-19 10:00:40.503214 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s
2025-06-19 10:00:40.503225 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.07s
2025-06-19 10:00:40.503236 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s
2025-06-19 10:00:40.503246 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s
2025-06-19 10:00:40.735048 | orchestrator | + osism apply sshconfig
2025-06-19 10:00:42.380536 | orchestrator | Registering Redlock._acquired_script
2025-06-19 10:00:42.380611 | orchestrator | Registering Redlock._extend_script
2025-06-19 10:00:42.380626 | orchestrator | Registering Redlock._release_script
2025-06-19 10:00:42.443785 | orchestrator | 2025-06-19 10:00:42 | INFO  | Task 37ff19fe-6eab-4442-86d1-193c3a71e022 (sshconfig) was prepared for execution.
2025-06-19 10:00:42.443856 | orchestrator | 2025-06-19 10:00:42 | INFO  | It takes a moment until task 37ff19fe-6eab-4442-86d1-193c3a71e022 (sshconfig) has been started and output is visible here.
2025-06-19 10:00:53.664947 | orchestrator | 
2025-06-19 10:00:53.665058 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2025-06-19 10:00:53.665073 | orchestrator | 
2025-06-19 10:00:53.665085 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2025-06-19 10:00:53.665096 | orchestrator | Thursday 19 June 2025 10:00:46 +0000 (0:00:00.159) 0:00:00.159 *********
2025-06-19 10:00:53.665108 | orchestrator | ok: [testbed-manager]
2025-06-19 10:00:53.665120 | orchestrator | 
2025-06-19 10:00:53.665131 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2025-06-19 10:00:53.665142 | orchestrator | Thursday 19 June 2025 10:00:46 +0000 (0:00:00.548) 0:00:00.708 *********
2025-06-19 10:00:53.665153 | orchestrator | changed: [testbed-manager]
2025-06-19 10:00:53.665165 | orchestrator | 
2025-06-19 10:00:53.665176 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2025-06-19 10:00:53.665186 | orchestrator | Thursday 19 June 2025 10:00:47 +0000 (0:00:00.500) 0:00:01.208 *********
2025-06-19 10:00:53.665218 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2025-06-19 10:00:53.665230 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2025-06-19 10:00:53.665241 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2025-06-19 10:00:53.665252 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2025-06-19 10:00:53.665263 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2025-06-19 10:00:53.665274 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2025-06-19 10:00:53.665308 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2025-06-19 10:00:53.665319 | orchestrator | 
2025-06-19 10:00:53.665330 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2025-06-19 10:00:53.665341 | orchestrator | Thursday 19 June 2025 10:00:52 +0000 (0:00:05.472) 0:00:06.681 *********
2025-06-19 10:00:53.665352 | orchestrator | skipping: [testbed-manager]
2025-06-19 10:00:53.665362 | orchestrator | 
2025-06-19 10:00:53.665373 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2025-06-19 10:00:53.665384 | orchestrator | Thursday 19 June 2025 10:00:52 +0000 (0:00:00.063) 0:00:06.744 *********
2025-06-19 10:00:53.665395 | orchestrator | changed: [testbed-manager]
2025-06-19 10:00:53.665444 | orchestrator | 
2025-06-19 10:00:53.665456 | orchestrator | PLAY RECAP *********************************************************************
2025-06-19 10:00:53.665471 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-19 10:00:53.665484 | orchestrator | 
2025-06-19 10:00:53.665496 | orchestrator | 
2025-06-19 10:00:53.665509 | orchestrator | TASKS RECAP ********************************************************************
2025-06-19 10:00:53.665522 | orchestrator | Thursday 19 June 2025 10:00:53 +0000 (0:00:00.582) 0:00:07.327 *********
2025-06-19 10:00:53.665534 | orchestrator | ===============================================================================
2025-06-19 10:00:53.665546 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.47s
2025-06-19 10:00:53.665558 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.58s
2025-06-19 10:00:53.665571 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.55s
2025-06-19 10:00:53.665583 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.50s
2025-06-19 10:00:53.665595 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.06s
2025-06-19 10:00:53.885644 | orchestrator | + osism apply known-hosts
2025-06-19 10:00:55.502255 | orchestrator | Registering Redlock._acquired_script
2025-06-19 10:00:55.502310 | orchestrator | Registering Redlock._extend_script
2025-06-19 10:00:55.502325 | orchestrator | Registering Redlock._release_script
2025-06-19 10:00:55.560761 | orchestrator | 2025-06-19 10:00:55 | INFO  | Task 4b317b35-bb3f-4f3b-8c5b-63acc591248e (known-hosts) was prepared for execution.
2025-06-19 10:00:55.560835 | orchestrator | 2025-06-19 10:00:55 | INFO  | It takes a moment until task 4b317b35-bb3f-4f3b-8c5b-63acc591248e (known-hosts) has been started and output is visible here.
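The sshconfig play above follows a simple fragment-and-assemble pattern: one config file per host under `.ssh/config.d`, then a single assembled ssh config. A minimal sketch of that pattern under assumed paths and placeholder Host options (the role's real templates carry more settings):

```shell
# Sketch of the per-host fragment + assemble pattern from the sshconfig
# play above. "base" stands in for the operator's ~/.ssh directory; the
# Host options written here are illustrative assumptions, not the role's
# actual template content.
assemble_ssh_config() {
    local base="$1"; shift
    mkdir -p "$base/config.d"                 # "Ensure .ssh/config.d exist"
    for host in "$@"; do                      # "Ensure config for each host exist"
        printf 'Host %s\n    StrictHostKeyChecking accept-new\n' "$host" \
            > "$base/config.d/$host"
    done
    cat "$base/config.d"/* > "$base/config"   # "Assemble ssh config"
}
```

Keeping one fragment per host makes the per-host task idempotent and lets the final assemble step regenerate the whole config deterministically.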
2025-06-19 10:01:12.029222 | orchestrator | 
2025-06-19 10:01:12.029338 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2025-06-19 10:01:12.029355 | orchestrator | 
2025-06-19 10:01:12.029368 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2025-06-19 10:01:12.029381 | orchestrator | Thursday 19 June 2025 10:00:59 +0000 (0:00:00.164) 0:00:00.164 *********
2025-06-19 10:01:12.029393 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-06-19 10:01:12.029405 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-06-19 10:01:12.029416 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-06-19 10:01:12.029427 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-06-19 10:01:12.029489 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-06-19 10:01:12.029501 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-06-19 10:01:12.029511 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-06-19 10:01:12.029522 | orchestrator | 
2025-06-19 10:01:12.029533 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2025-06-19 10:01:12.029546 | orchestrator | Thursday 19 June 2025 10:01:05 +0000 (0:00:05.812) 0:00:05.976 *********
2025-06-19 10:01:12.029559 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-06-19 10:01:12.029595 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-06-19 10:01:12.029617 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-06-19 10:01:12.029629 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-06-19 10:01:12.029641 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-06-19 10:01:12.029652 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-06-19 10:01:12.029663 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-06-19 10:01:12.029674 | orchestrator | 
2025-06-19 10:01:12.029685 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-19 10:01:12.029696 | orchestrator | Thursday 19 June 2025 10:01:05 +0000 (0:00:00.172) 0:00:06.149 *********
2025-06-19 10:01:12.029707 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBDiZUNU4KW8pvkGVIraaZvlmjGdtfR+iSUfrTsrU6/o)
2025-06-19 10:01:12.029723 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtrCEmX0wT/2qg2v70q462OAETx3EwjrC4os8lQk2oveYOc1RFJLF135rTQdN2+heA0qO4rU7lOyzsKQNmQcGSakQUXbRNm/nJeeGnRV/RRH3hx/5twUBRnwPNdU3/Lpq8VOIfiE00ya70p4gBhqO1yktRL3iSoYnzMMIAR4x4t2mMQixrJ8tu86+v3kH3+lk0122+SOJxbbYW0DgTohzheWnoGdsJUdpMG/cFRwG9UdxAAKkdwyWxS23DADARKej3Wmnb5IGoHWr7mmXEUKbb8puX53g2w70jL9BE3y6oR5vErlqZun3u75t53sDpMB2nMWGXV/ATrdo3q+LYNQ5vwVsOoKv/N0pLEYZkfZbg+20NW/YDfYKU/bSIENti1pJ26lfTv+whHX20FeLuLIIiarTqXLLHvQ0adxlqwGxLn6NWokamRrXPlQiFT7tei3XP/YOTuTs0hWQPMxm/ZadioiUe/r1qYlbd1fEgv5Kk/X5ePmdM4ubqSirrJs/rcEU=)
2025-06-19 10:01:12.029740 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMY1OBsw/EQUNd8u5++ectI/Qfky8j1wM5/RKSvyE8Eg/MMVDz0wc5Q31H2vgpDutvpxgY3V6lio+njw+C1Rhmo=)
2025-06-19 10:01:12.029755 | orchestrator | 
2025-06-19 10:01:12.029768 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-19 10:01:12.029780 | orchestrator | Thursday 19 June 2025 10:01:06 +0000 (0:00:01.221) 0:00:07.371 *********
2025-06-19 10:01:12.029792 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIM8xf7/4JskhEa7Upd99dlMFTQLa64FNt6NqoIc706nEkIve+EW63axOdFNXLar5xjWGXQ7RDr173JoK7lxKb8=)
2025-06-19 10:01:12.029805 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKpMjsUtbEK08OxH5xpIVd+KS2qTk/O79OkAcvrfF7eT)
2025-06-19 10:01:12.029844 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDH3NmCdIxQjti1jebtSO6nUnuz2c6/YbkJ9RP7RMkHaUjPO0lA00fgW/LqsWRaHGQGCIcaVO4kVneXmaZV4Ziye6CGATxfjLC3BF8bPbUbazjfV6diN/IW7L2IZHAWXfi8TVi2W3QDbXB1rLFzFe7Pfm0ecqt9jduMOjrrrqH1zUU2LwbwsUT4UYbcvfHN5VZEZYtAFEhbw/SgZIjUW0C+GLmxCuv8D/n+Zvwkg3fMaFNLRBcXXlfHp5Fm9EpALnxeuVMESBGpAj1RgXYqUcP5nKBFyWjGALQDzZHSuon7ugdQsfQrUS8Kff1RzvMftoacDWHnc4JvpBron8ElIQ5p8on1oUbCJSy+5oUaPdAJpVq0YDzMgq9/UiCRbmhoSlkyFmaHtGQxuqgW60kmQ/giEK0MfTIrcLMcQMleU6UevZlTe8XzA/bVhDeVSCSngMIfdXTOtGrn5VVFP1gPhgln72PHKVhPiEL5eqq9q/hhgNhef9YHHDysFtt12/6mK0s=)
2025-06-19 10:01:12.029858 | orchestrator | 
2025-06-19 10:01:12.029880 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-19 10:01:12.029893 | orchestrator | Thursday 19 June 2025 10:01:07 +0000 (0:00:01.062) 0:00:08.434 *********
2025-06-19 10:01:12.029907 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCn6YrpYx8SMf9r6DA/sEtI6OvR+Q0JgScrveZnrlhCPP0N0CC3QzVIhdx82nM5x1fd6zV6cRH7AJefB6nbzgBIwki97OnKXtsDPFieDloAPt7EaRQwWxGpXrOh952ZB0N9VXQXI8VSSlALcHDpcCyyhmJSPWBpqHqXCU9rV32MNKrqwwFPPHNKTsAIUKqMLligQ2KSTXZxXsRweFZT0yWspRMnAC4+XhaKdxPWUqcQ4WtuwGZi4uL9xaOIAgPpBg/ZQjlrdm82zFk3ozgshHqCAkCJaAV+QxFMUiLCVSKq9MwpIuBA1vwxWt3WiT6OjBUpSaDR4lXOSNh3xv3svT3ndYvJe124Old7sLR2bcFPR5+Z1zRApkPTUNYuhDK7wvkvkRztWxZsne8npxF4DDB/y1xrI5xNaeZXdeXzTuum6SVQnM34AO1dg2JHwBS4y28/1GbErs6dIZq7vHBX5pWs1Es7T1J/B600aSurVH7VId0xTUrRBvwl+14obE4SHtc=)
2025-06-19 10:01:12.029985 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGgfS4ZFwFu2/JN0tzMrEuNgY0JBimNJhZsD8r1/YZosBQLMLIa1GwHoTpZENTw/jUCOhMyY4uLPWdf5P6a2xN8=)
2025-06-19 10:01:12.030000 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINk9s+deL5xwmhQeImkHQ5v9ZOT+DJVOOis0J8rDaw23)
2025-06-19 10:01:12.030012 | orchestrator | 
2025-06-19 10:01:12.030088 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-19 10:01:12.030100 | orchestrator | Thursday 19 June 2025 10:01:08 +0000 (0:00:01.077) 0:00:09.511 *********
2025-06-19 10:01:12.030111 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9lMklMoFqv/Mk1b9Enbmp96peiolujmDLKuRuhLCYAix6AsSBDCAk+MX0mNyWhahyWNMI3zh9Up+f7oxC0azifLLrMudz2bCtdsCzbGI6PVzxxwV8gwgmBl08hoh+YazxWN68WMeFjF+aTGOarXABvASaC6704kchO4pEXPEDisA4DWNdAw3sdcm6UIjBuBjQkoOF4glpkKs2/FvBQH9x0MWo57aaE/smn11hQ1jYAUFAXOylQaakpsZfCR42Kprkg9EK9yjjZOoRRE/rkLJ1vXu3ar2Rwr9k6zXv83r+xZAy/lNwUSh3nkUDty4si3M7uf2wjvY/u+3zi5vjpgRLa3OXf/pWX272+uquGESiYM3rMQVhvbOU4lMAZnhogbKHc+UW6841dpZ8KD94GIANTiC9fcF0rCBv1MMJqwplOCreAMLOhwOjb6M5TDnslJaripwgnhRdVAGvoBY4HO7m8AhQeKaB423EQiBTtaIrYF/01EPy3MsyvdOPZqW/hWE=)
2025-06-19 10:01:12.030122 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJS4He9egM8VC0qLXkTLIEOYEbsoDXMI4293buSP2RKR)
2025-06-19 10:01:12.030133 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKluWRduZY0oLUdwWB6WrXxQcLQG3tLOpX7XliG8Is9uWOLconeeGDHXsTRacmGNnhNEhRA8VZhzVrdh063boXI=)
2025-06-19 10:01:12.030144 | orchestrator | 
2025-06-19 10:01:12.030155 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-19 10:01:12.030166 | orchestrator | Thursday 19 June 2025 10:01:09 +0000 (0:00:01.044) 0:00:10.556 *********
2025-06-19 10:01:12.030177 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNZ9eJL6UIAxys0IvgpbO1Yf8vGEzF7+YBz+9fXzdruW4zqT0fjNlJdDbzacGGBjvlY1v6+H1ipG6OQwHxI+wDQ=)
2025-06-19 10:01:12.030189 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDG7S0sM2CIsCo0/CfEQCyiO8eDwjMGUPIwjZV41xruK8KMRLe2Tm+S7yE5Q+BWAzkXfyrBuNflruf08iVeHs8QqUno4bzAIjqlc20VpzKjGgBtOeuUzI0wfUSZieUTBm0oGpMpgsN5PTmMDOoZRxm8BHGZKIJAnFdBOgNburTXPl436gCx1oSboH5Op7nWB3xXel9UoeQ2N8tN2OrrVy0ka7Bo6dpxdlm7Ci1ltWKJDIGW3089T3t3whkGmp0Xd0bRoK/tyBuAyeZEh7CwBkTB1h1fVkze2/0Qlf2AdH6Y8nJ+7nlf8QiSixuWBcfPj4RkPjmOImY1kNT1jCrs2Bun9PkusHlCHueUFeAl1a37Rf5SCwFroS4kmKnA/6U7gygYa8EWBs9Ryo0MQsOGPlTyQxI2VqUImbeUWzGY1mEauuR9tlfjQ2NnPyp8Q3lGsg6Dn2+PXgr5/+CUdciFls7Ua3O0gvJGMlrUXDLTGL/GsQWyqnM9RHVL7/tRE6+3r0E=)
2025-06-19 10:01:12.030201 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINOGP6u5hVm0RoGnuJF/ECNqblq1Cys2LgeoiXFMJ4yc)
2025-06-19 10:01:12.030212 | orchestrator | 
2025-06-19 10:01:12.030223 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-19 10:01:12.030234 | orchestrator | Thursday 19 June 2025 10:01:10 +0000 (0:00:01.066) 0:00:11.623 *********
2025-06-19 10:01:12.030265 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCGlofyMZ0XGrPCHQvzF4RsXQKzCj3JOzPc7lcuOQbuFptI8NDD7Tm026ydyDMLs1cfWyUQqRzTTwVNY5JCwjRGt3TRio/kGkcvOD+TLk7sw9rpPTYA5qoPc3gmknnrdvWkbO8Emb8PcNRuBZyIbVTGekXaTB5En2s7VPq/uWrj3DJi03lfPq8uag7I8VvjpqL10FGH+13ZD3fIkMWbd7UqbKjYXyvnOE+WYGJKg3eKRrLKJ3XuTULVtTfnuXnbTGgZvr3JB0RrpFlUQpYX2AIz9fax6nNgGVwvHkne50itRzoqdFvB0mjoXm8QzzbIZv7RWLIhrGi7TrCpWcvobumP73w7J8Gmr36xPe6z81TNvzXjdscM1Ydp6L/i/C+erY86Nl5l+YniuVA/owbrv064Ftdc5WiHUEihTNEeYOYxkv0zEBPGBTJ+zVhmV/S7fe0tJp8oToLv6Q8j1HWhhnecKRMEkCEu4gICLpy162MqF1rwo+NtEMEwKUWZWd7aa88=)
2025-06-19 10:01:22.835307 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO7Y8vQHasvVRf3fcwuPwWn/R85Kmsyou8laBMDU8Fllh4KJtyk5KutJT9cxh+e2rrakAUyGhvFaoAMQfB0UcU0=)
2025-06-19 10:01:22.835395 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICHl03oTrqMtKys0cjXI0pPZah9y1ovpH6sDG3PpWN5C)
2025-06-19 10:01:22.835403 | orchestrator | 
2025-06-19 10:01:22.835410 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-19 10:01:22.835417 | orchestrator | Thursday 19 June 2025 10:01:12 +0000 (0:00:01.101) 0:00:12.725 *********
2025-06-19 10:01:22.835423 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC4X0mQYkf8zWOK70uw5+GcnQgUjtRMxnKTfXPm8z3z53fg283KlnEJJD+eXFpMg1sChWFkK6waQHSu0sgeG60b5HO5kONSOLJ3qS6QlZHfZVm8H5mCnRV+sMzwQvjreYl1F6qwbgbpYEPSoEUjqxTS1fPmq+BcUgZQW+9keZc3+xhm9xQPBEO4nT4WKDPVfZ3shdiFfT5vJgm/wcjKVt0FMh1zEh++vOviG+BphMRXYbToUG+6EZdnLUTokg/70/GnjBUTCQgv5SUOeew0dark6/Ouej9pUjEHeFdsSUnjfHHS4y5OQojsRvaX98CdAJIXN7dSCdSjIn8Aau1sBj9UIzv5A7ex5aLiyv49fg7yICe6Uvfu3qcXr7W4G6NIRrFQ3/8txGKdfZI1qDxocjGjYq4NNTJi0s0efOSW5paoLDKfMyZ/6fP4C7bT3/167Ja0Cfwm4D8o/cijhqoHR8P36KHwL6lNot2lJi+9Nz0rKa96Tt5shOa64e7Sj5VqLu8=)
2025-06-19 10:01:22.835442 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJtqDgsKHd7tkXLsF4SjRuzIqmCqRmiDuZHYt+pJ9DrTz1rssW4eeOwNV2ARsGRFJGeBXuA2uqD9cBZhbdCv2pA=)
2025-06-19 10:01:22.835490 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICkXeYKSUbjfY/9CBlf/bPdaR9NvM9r+Ia8y1Gyag5qd)
2025-06-19 10:01:22.835498 | orchestrator | 
2025-06-19 10:01:22.835503 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] ***
2025-06-19 10:01:22.835508 | orchestrator | Thursday 19 June 2025 10:01:13 +0000 (0:00:01.058) 0:00:13.783 *********
2025-06-19 10:01:22.835514 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-06-19 10:01:22.835519 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-06-19 10:01:22.835523 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-06-19 10:01:22.835528 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-06-19 10:01:22.835532 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-06-19 10:01:22.835537 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-06-19 10:01:22.835541 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-06-19 10:01:22.835546 | orchestrator | 
2025-06-19 10:01:22.835551 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] ***
2025-06-19 10:01:22.835557 | orchestrator | Thursday 19 June 2025 10:01:18 +0000 (0:00:05.378) 0:00:19.162 *********
2025-06-19 10:01:22.835563 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-06-19 10:01:22.835569 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-06-19 10:01:22.835574 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-06-19 10:01:22.835594 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-06-19 10:01:22.835599 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-06-19 10:01:22.835604 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-06-19 10:01:22.835609 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-06-19 10:01:22.835614 | orchestrator | 
2025-06-19 10:01:22.835618 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-19 10:01:22.835623 | orchestrator | Thursday 19 June 2025 10:01:18 +0000 (0:00:00.159) 0:00:19.322 *********
2025-06-19 10:01:22.835628 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBDiZUNU4KW8pvkGVIraaZvlmjGdtfR+iSUfrTsrU6/o)
2025-06-19 10:01:22.835650 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtrCEmX0wT/2qg2v70q462OAETx3EwjrC4os8lQk2oveYOc1RFJLF135rTQdN2+heA0qO4rU7lOyzsKQNmQcGSakQUXbRNm/nJeeGnRV/RRH3hx/5twUBRnwPNdU3/Lpq8VOIfiE00ya70p4gBhqO1yktRL3iSoYnzMMIAR4x4t2mMQixrJ8tu86+v3kH3+lk0122+SOJxbbYW0DgTohzheWnoGdsJUdpMG/cFRwG9UdxAAKkdwyWxS23DADARKej3Wmnb5IGoHWr7mmXEUKbb8puX53g2w70jL9BE3y6oR5vErlqZun3u75t53sDpMB2nMWGXV/ATrdo3q+LYNQ5vwVsOoKv/N0pLEYZkfZbg+20NW/YDfYKU/bSIENti1pJ26lfTv+whHX20FeLuLIIiarTqXLLHvQ0adxlqwGxLn6NWokamRrXPlQiFT7tei3XP/YOTuTs0hWQPMxm/ZadioiUe/r1qYlbd1fEgv5Kk/X5ePmdM4ubqSirrJs/rcEU=)
2025-06-19 10:01:22.835656 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMY1OBsw/EQUNd8u5++ectI/Qfky8j1wM5/RKSvyE8Eg/MMVDz0wc5Q31H2vgpDutvpxgY3V6lio+njw+C1Rhmo=)
2025-06-19 10:01:22.835661 | orchestrator | 
2025-06-19 10:01:22.835665 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-19 10:01:22.835670 | orchestrator | Thursday 19 June 2025 
10:01:19 +0000 (0:00:01.042) 0:00:20.364 ********* 2025-06-19 10:01:22.835675 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKpMjsUtbEK08OxH5xpIVd+KS2qTk/O79OkAcvrfF7eT) 2025-06-19 10:01:22.835680 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDH3NmCdIxQjti1jebtSO6nUnuz2c6/YbkJ9RP7RMkHaUjPO0lA00fgW/LqsWRaHGQGCIcaVO4kVneXmaZV4Ziye6CGATxfjLC3BF8bPbUbazjfV6diN/IW7L2IZHAWXfi8TVi2W3QDbXB1rLFzFe7Pfm0ecqt9jduMOjrrrqH1zUU2LwbwsUT4UYbcvfHN5VZEZYtAFEhbw/SgZIjUW0C+GLmxCuv8D/n+Zvwkg3fMaFNLRBcXXlfHp5Fm9EpALnxeuVMESBGpAj1RgXYqUcP5nKBFyWjGALQDzZHSuon7ugdQsfQrUS8Kff1RzvMftoacDWHnc4JvpBron8ElIQ5p8on1oUbCJSy+5oUaPdAJpVq0YDzMgq9/UiCRbmhoSlkyFmaHtGQxuqgW60kmQ/giEK0MfTIrcLMcQMleU6UevZlTe8XzA/bVhDeVSCSngMIfdXTOtGrn5VVFP1gPhgln72PHKVhPiEL5eqq9q/hhgNhef9YHHDysFtt12/6mK0s=) 2025-06-19 10:01:22.835684 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIM8xf7/4JskhEa7Upd99dlMFTQLa64FNt6NqoIc706nEkIve+EW63axOdFNXLar5xjWGXQ7RDr173JoK7lxKb8=) 2025-06-19 10:01:22.835689 | orchestrator | 2025-06-19 10:01:22.835694 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-19 10:01:22.835698 | orchestrator | Thursday 19 June 2025 10:01:20 +0000 (0:00:01.051) 0:00:21.416 ********* 2025-06-19 10:01:22.835706 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCn6YrpYx8SMf9r6DA/sEtI6OvR+Q0JgScrveZnrlhCPP0N0CC3QzVIhdx82nM5x1fd6zV6cRH7AJefB6nbzgBIwki97OnKXtsDPFieDloAPt7EaRQwWxGpXrOh952ZB0N9VXQXI8VSSlALcHDpcCyyhmJSPWBpqHqXCU9rV32MNKrqwwFPPHNKTsAIUKqMLligQ2KSTXZxXsRweFZT0yWspRMnAC4+XhaKdxPWUqcQ4WtuwGZi4uL9xaOIAgPpBg/ZQjlrdm82zFk3ozgshHqCAkCJaAV+QxFMUiLCVSKq9MwpIuBA1vwxWt3WiT6OjBUpSaDR4lXOSNh3xv3svT3ndYvJe124Old7sLR2bcFPR5+Z1zRApkPTUNYuhDK7wvkvkRztWxZsne8npxF4DDB/y1xrI5xNaeZXdeXzTuum6SVQnM34AO1dg2JHwBS4y28/1GbErs6dIZq7vHBX5pWs1Es7T1J/B600aSurVH7VId0xTUrRBvwl+14obE4SHtc=) 2025-06-19 10:01:22.835716 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGgfS4ZFwFu2/JN0tzMrEuNgY0JBimNJhZsD8r1/YZosBQLMLIa1GwHoTpZENTw/jUCOhMyY4uLPWdf5P6a2xN8=) 2025-06-19 10:01:22.835721 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINk9s+deL5xwmhQeImkHQ5v9ZOT+DJVOOis0J8rDaw23) 2025-06-19 10:01:22.835726 | orchestrator | 2025-06-19 10:01:22.835731 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-19 10:01:22.835736 | orchestrator | Thursday 19 June 2025 10:01:21 +0000 (0:00:01.045) 0:00:22.461 ********* 2025-06-19 10:01:22.835741 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKluWRduZY0oLUdwWB6WrXxQcLQG3tLOpX7XliG8Is9uWOLconeeGDHXsTRacmGNnhNEhRA8VZhzVrdh063boXI=) 2025-06-19 10:01:22.835746 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJS4He9egM8VC0qLXkTLIEOYEbsoDXMI4293buSP2RKR) 2025-06-19 10:01:22.835754 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC9lMklMoFqv/Mk1b9Enbmp96peiolujmDLKuRuhLCYAix6AsSBDCAk+MX0mNyWhahyWNMI3zh9Up+f7oxC0azifLLrMudz2bCtdsCzbGI6PVzxxwV8gwgmBl08hoh+YazxWN68WMeFjF+aTGOarXABvASaC6704kchO4pEXPEDisA4DWNdAw3sdcm6UIjBuBjQkoOF4glpkKs2/FvBQH9x0MWo57aaE/smn11hQ1jYAUFAXOylQaakpsZfCR42Kprkg9EK9yjjZOoRRE/rkLJ1vXu3ar2Rwr9k6zXv83r+xZAy/lNwUSh3nkUDty4si3M7uf2wjvY/u+3zi5vjpgRLa3OXf/pWX272+uquGESiYM3rMQVhvbOU4lMAZnhogbKHc+UW6841dpZ8KD94GIANTiC9fcF0rCBv1MMJqwplOCreAMLOhwOjb6M5TDnslJaripwgnhRdVAGvoBY4HO7m8AhQeKaB423EQiBTtaIrYF/01EPy3MsyvdOPZqW/hWE=) 2025-06-19 10:01:26.995377 | orchestrator | 2025-06-19 10:01:26.995556 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-19 10:01:26.995575 | orchestrator | Thursday 19 June 2025 10:01:22 +0000 (0:00:01.069) 0:00:23.531 ********* 2025-06-19 10:01:26.995590 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDG7S0sM2CIsCo0/CfEQCyiO8eDwjMGUPIwjZV41xruK8KMRLe2Tm+S7yE5Q+BWAzkXfyrBuNflruf08iVeHs8QqUno4bzAIjqlc20VpzKjGgBtOeuUzI0wfUSZieUTBm0oGpMpgsN5PTmMDOoZRxm8BHGZKIJAnFdBOgNburTXPl436gCx1oSboH5Op7nWB3xXel9UoeQ2N8tN2OrrVy0ka7Bo6dpxdlm7Ci1ltWKJDIGW3089T3t3whkGmp0Xd0bRoK/tyBuAyeZEh7CwBkTB1h1fVkze2/0Qlf2AdH6Y8nJ+7nlf8QiSixuWBcfPj4RkPjmOImY1kNT1jCrs2Bun9PkusHlCHueUFeAl1a37Rf5SCwFroS4kmKnA/6U7gygYa8EWBs9Ryo0MQsOGPlTyQxI2VqUImbeUWzGY1mEauuR9tlfjQ2NnPyp8Q3lGsg6Dn2+PXgr5/+CUdciFls7Ua3O0gvJGMlrUXDLTGL/GsQWyqnM9RHVL7/tRE6+3r0E=) 2025-06-19 10:01:26.995605 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNZ9eJL6UIAxys0IvgpbO1Yf8vGEzF7+YBz+9fXzdruW4zqT0fjNlJdDbzacGGBjvlY1v6+H1ipG6OQwHxI+wDQ=) 2025-06-19 10:01:26.995619 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINOGP6u5hVm0RoGnuJF/ECNqblq1Cys2LgeoiXFMJ4yc) 2025-06-19 10:01:26.995631 | orchestrator | 2025-06-19 10:01:26.995643 | orchestrator | 
TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-19 10:01:26.995654 | orchestrator | Thursday 19 June 2025 10:01:23 +0000 (0:00:01.083) 0:00:24.615 ********* 2025-06-19 10:01:26.995665 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICHl03oTrqMtKys0cjXI0pPZah9y1ovpH6sDG3PpWN5C) 2025-06-19 10:01:26.995676 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCGlofyMZ0XGrPCHQvzF4RsXQKzCj3JOzPc7lcuOQbuFptI8NDD7Tm026ydyDMLs1cfWyUQqRzTTwVNY5JCwjRGt3TRio/kGkcvOD+TLk7sw9rpPTYA5qoPc3gmknnrdvWkbO8Emb8PcNRuBZyIbVTGekXaTB5En2s7VPq/uWrj3DJi03lfPq8uag7I8VvjpqL10FGH+13ZD3fIkMWbd7UqbKjYXyvnOE+WYGJKg3eKRrLKJ3XuTULVtTfnuXnbTGgZvr3JB0RrpFlUQpYX2AIz9fax6nNgGVwvHkne50itRzoqdFvB0mjoXm8QzzbIZv7RWLIhrGi7TrCpWcvobumP73w7J8Gmr36xPe6z81TNvzXjdscM1Ydp6L/i/C+erY86Nl5l+YniuVA/owbrv064Ftdc5WiHUEihTNEeYOYxkv0zEBPGBTJ+zVhmV/S7fe0tJp8oToLv6Q8j1HWhhnecKRMEkCEu4gICLpy162MqF1rwo+NtEMEwKUWZWd7aa88=) 2025-06-19 10:01:26.995717 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO7Y8vQHasvVRf3fcwuPwWn/R85Kmsyou8laBMDU8Fllh4KJtyk5KutJT9cxh+e2rrakAUyGhvFaoAMQfB0UcU0=) 2025-06-19 10:01:26.995729 | orchestrator | 2025-06-19 10:01:26.995740 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-19 10:01:26.995751 | orchestrator | Thursday 19 June 2025 10:01:24 +0000 (0:00:01.013) 0:00:25.628 ********* 2025-06-19 10:01:26.995762 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJtqDgsKHd7tkXLsF4SjRuzIqmCqRmiDuZHYt+pJ9DrTz1rssW4eeOwNV2ARsGRFJGeBXuA2uqD9cBZhbdCv2pA=) 2025-06-19 10:01:26.995774 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC4X0mQYkf8zWOK70uw5+GcnQgUjtRMxnKTfXPm8z3z53fg283KlnEJJD+eXFpMg1sChWFkK6waQHSu0sgeG60b5HO5kONSOLJ3qS6QlZHfZVm8H5mCnRV+sMzwQvjreYl1F6qwbgbpYEPSoEUjqxTS1fPmq+BcUgZQW+9keZc3+xhm9xQPBEO4nT4WKDPVfZ3shdiFfT5vJgm/wcjKVt0FMh1zEh++vOviG+BphMRXYbToUG+6EZdnLUTokg/70/GnjBUTCQgv5SUOeew0dark6/Ouej9pUjEHeFdsSUnjfHHS4y5OQojsRvaX98CdAJIXN7dSCdSjIn8Aau1sBj9UIzv5A7ex5aLiyv49fg7yICe6Uvfu3qcXr7W4G6NIRrFQ3/8txGKdfZI1qDxocjGjYq4NNTJi0s0efOSW5paoLDKfMyZ/6fP4C7bT3/167Ja0Cfwm4D8o/cijhqoHR8P36KHwL6lNot2lJi+9Nz0rKa96Tt5shOa64e7Sj5VqLu8=) 2025-06-19 10:01:26.995785 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICkXeYKSUbjfY/9CBlf/bPdaR9NvM9r+Ia8y1Gyag5qd) 2025-06-19 10:01:26.995796 | orchestrator | 2025-06-19 10:01:26.995807 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-06-19 10:01:26.995818 | orchestrator | Thursday 19 June 2025 10:01:25 +0000 (0:00:01.064) 0:00:26.692 ********* 2025-06-19 10:01:26.995829 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-06-19 10:01:26.995841 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-06-19 10:01:26.995851 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-06-19 10:01:26.995862 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-06-19 10:01:26.995873 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-06-19 10:01:26.995885 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-06-19 10:01:26.995896 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-06-19 10:01:26.995907 | orchestrator | skipping: [testbed-manager] 2025-06-19 10:01:26.995921 | orchestrator | 2025-06-19 10:01:26.995968 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-06-19 10:01:26.995982 | orchestrator | Thursday 19 June 
2025 10:01:26 +0000 (0:00:00.163) 0:00:26.855 ********* 2025-06-19 10:01:26.995994 | orchestrator | skipping: [testbed-manager] 2025-06-19 10:01:26.996007 | orchestrator | 2025-06-19 10:01:26.996020 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-06-19 10:01:26.996031 | orchestrator | Thursday 19 June 2025 10:01:26 +0000 (0:00:00.060) 0:00:26.916 ********* 2025-06-19 10:01:26.996047 | orchestrator | skipping: [testbed-manager] 2025-06-19 10:01:26.996058 | orchestrator | 2025-06-19 10:01:26.996069 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-06-19 10:01:26.996080 | orchestrator | Thursday 19 June 2025 10:01:26 +0000 (0:00:00.051) 0:00:26.967 ********* 2025-06-19 10:01:26.996091 | orchestrator | changed: [testbed-manager] 2025-06-19 10:01:26.996102 | orchestrator | 2025-06-19 10:01:26.996112 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-19 10:01:26.996132 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-19 10:01:26.996144 | orchestrator | 2025-06-19 10:01:26.996155 | orchestrator | 2025-06-19 10:01:26.996166 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-19 10:01:26.996177 | orchestrator | Thursday 19 June 2025 10:01:26 +0000 (0:00:00.478) 0:00:27.446 ********* 2025-06-19 10:01:26.996187 | orchestrator | =============================================================================== 2025-06-19 10:01:26.996198 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.81s 2025-06-19 10:01:26.996209 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.38s 2025-06-19 10:01:26.996221 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.22s 2025-06-19 
10:01:26.996232 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-06-19 10:01:26.996242 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-06-19 10:01:26.996253 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2025-06-19 10:01:26.996264 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-06-19 10:01:26.996274 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-06-19 10:01:26.996303 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-06-19 10:01:26.996314 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-06-19 10:01:26.996325 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-06-19 10:01:26.996336 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-06-19 10:01:26.996347 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-06-19 10:01:26.996357 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-06-19 10:01:26.996368 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-06-19 10:01:26.996379 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2025-06-19 10:01:26.996390 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.48s 2025-06-19 10:01:26.996424 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s 2025-06-19 10:01:26.996443 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.16s 
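The known_hosts tasks above scan each host's keys and write deduplicated entries, then fix file permissions. A minimal shell sketch of that flow, with a canned `SCAN` variable standing in for real `ssh-keyscan` output (assumption; the real role scans live hosts):

```shell
tmpdir=$(mktemp -d)
known_hosts="$tmpdir/known_hosts"

# Stand-in for `ssh-keyscan <host>` output; note the duplicate entry.
SCAN='testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5EXAMPLEKEY0
testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5EXAMPLEKEY1
testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5EXAMPLEKEY0'

# Write scanned entries, dropping exact duplicates (one line per key).
printf '%s\n' "$SCAN" | sort -u > "$known_hosts"

# Mirrors the final "Set file permissions" task.
chmod 0644 "$known_hosts"
```

Entries have the `host key-type base64-key` shape visible in the log; deduplication matters because hosts are scanned by both inventory hostname and `ansible_host` address.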
2025-06-19 10:01:26.996488 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.16s 2025-06-19 10:01:27.231604 | orchestrator | + osism apply squid 2025-06-19 10:01:28.825215 | orchestrator | Registering Redlock._acquired_script 2025-06-19 10:01:28.825283 | orchestrator | Registering Redlock._extend_script 2025-06-19 10:01:28.825289 | orchestrator | Registering Redlock._release_script 2025-06-19 10:01:28.884110 | orchestrator | 2025-06-19 10:01:28 | INFO  | Task 42ab2e9b-f4a5-46fb-898a-edefea6f4a21 (squid) was prepared for execution. 2025-06-19 10:01:28.884146 | orchestrator | 2025-06-19 10:01:28 | INFO  | It takes a moment until task 42ab2e9b-f4a5-46fb-898a-edefea6f4a21 (squid) has been started and output is visible here. 2025-06-19 10:03:26.270707 | orchestrator | 2025-06-19 10:03:26.270867 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-06-19 10:03:26.270896 | orchestrator | 2025-06-19 10:03:26.270916 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-06-19 10:03:26.270937 | orchestrator | Thursday 19 June 2025 10:01:32 +0000 (0:00:00.164) 0:00:00.164 ********* 2025-06-19 10:03:26.270958 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-06-19 10:03:26.270978 | orchestrator | 2025-06-19 10:03:26.271033 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-06-19 10:03:26.271054 | orchestrator | Thursday 19 June 2025 10:01:32 +0000 (0:00:00.083) 0:00:00.248 ********* 2025-06-19 10:03:26.271073 | orchestrator | ok: [testbed-manager] 2025-06-19 10:03:26.271092 | orchestrator | 2025-06-19 10:03:26.271111 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-06-19 10:03:26.271130 | orchestrator 
| Thursday 19 June 2025 10:01:34 +0000 (0:00:01.408) 0:00:01.657 ********* 2025-06-19 10:03:26.271150 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-06-19 10:03:26.271170 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-06-19 10:03:26.271189 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-06-19 10:03:26.271241 | orchestrator | 2025-06-19 10:03:26.271262 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-06-19 10:03:26.271281 | orchestrator | Thursday 19 June 2025 10:01:35 +0000 (0:00:01.146) 0:00:02.803 ********* 2025-06-19 10:03:26.271300 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-06-19 10:03:26.271320 | orchestrator | 2025-06-19 10:03:26.271340 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-06-19 10:03:26.271360 | orchestrator | Thursday 19 June 2025 10:01:36 +0000 (0:00:01.014) 0:00:03.817 ********* 2025-06-19 10:03:26.271380 | orchestrator | ok: [testbed-manager] 2025-06-19 10:03:26.271398 | orchestrator | 2025-06-19 10:03:26.271419 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-06-19 10:03:26.271439 | orchestrator | Thursday 19 June 2025 10:01:36 +0000 (0:00:00.374) 0:00:04.192 ********* 2025-06-19 10:03:26.271458 | orchestrator | changed: [testbed-manager] 2025-06-19 10:03:26.271479 | orchestrator | 2025-06-19 10:03:26.271499 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-06-19 10:03:26.271519 | orchestrator | Thursday 19 June 2025 10:01:37 +0000 (0:00:00.904) 0:00:05.096 ********* 2025-06-19 10:03:26.271540 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2025-06-19 10:03:26.271560 | orchestrator | ok: [testbed-manager] 2025-06-19 10:03:26.271579 | orchestrator | 2025-06-19 10:03:26.271598 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-06-19 10:03:26.271618 | orchestrator | Thursday 19 June 2025 10:02:12 +0000 (0:00:34.868) 0:00:39.965 ********* 2025-06-19 10:03:26.271637 | orchestrator | changed: [testbed-manager] 2025-06-19 10:03:26.271694 | orchestrator | 2025-06-19 10:03:26.271713 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-06-19 10:03:26.271732 | orchestrator | Thursday 19 June 2025 10:02:25 +0000 (0:00:12.686) 0:00:52.651 ********* 2025-06-19 10:03:26.271749 | orchestrator | Pausing for 60 seconds 2025-06-19 10:03:26.271767 | orchestrator | changed: [testbed-manager] 2025-06-19 10:03:26.271786 | orchestrator | 2025-06-19 10:03:26.271804 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-06-19 10:03:26.271824 | orchestrator | Thursday 19 June 2025 10:03:25 +0000 (0:01:00.079) 0:01:52.731 ********* 2025-06-19 10:03:26.271843 | orchestrator | ok: [testbed-manager] 2025-06-19 10:03:26.271862 | orchestrator | 2025-06-19 10:03:26.271881 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-06-19 10:03:26.271900 | orchestrator | Thursday 19 June 2025 10:03:25 +0000 (0:00:00.070) 0:01:52.802 ********* 2025-06-19 10:03:26.271919 | orchestrator | changed: [testbed-manager] 2025-06-19 10:03:26.271936 | orchestrator | 2025-06-19 10:03:26.271955 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-19 10:03:26.271974 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-19 10:03:26.271993 | orchestrator | 2025-06-19 10:03:26.272009 | orchestrator | 2025-06-19 10:03:26.272026 | orchestrator | 
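The "Manage squid service" task above shows Ansible's retry behavior ("FAILED - RETRYING ... (10 retries left)") followed by a wait for a healthy service. A bare-bones shell version of that retry-until-healthy pattern; `service_healthy` is a stand-in for the real healthcheck (assumption) and here succeeds on the third attempt to demonstrate the loop:

```shell
attempts=0
# Stub healthcheck: reports healthy from the third call onward.
service_healthy() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]
}

retries=10
until service_healthy; do
  retries=$((retries - 1))
  [ "$retries" -gt 0 ] || { echo "service never became healthy" >&2; exit 1; }
  sleep 1   # the real task waits far longer between retries
done
echo "healthy after $attempts checks"
```

In the real play the healthcheck would inspect the squid container's state rather than call a stub.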
TASKS RECAP ******************************************************************** 2025-06-19 10:03:26.272043 | orchestrator | Thursday 19 June 2025 10:03:26 +0000 (0:00:00.615) 0:01:53.417 ********* 2025-06-19 10:03:26.272078 | orchestrator | =============================================================================== 2025-06-19 10:03:26.272117 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2025-06-19 10:03:26.272135 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 34.87s 2025-06-19 10:03:26.272152 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.69s 2025-06-19 10:03:26.272168 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.41s 2025-06-19 10:03:26.272185 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.15s 2025-06-19 10:03:26.272202 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.01s 2025-06-19 10:03:26.272219 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.90s 2025-06-19 10:03:26.272237 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.62s 2025-06-19 10:03:26.272255 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.37s 2025-06-19 10:03:26.272273 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s 2025-06-19 10:03:26.272290 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2025-06-19 10:03:26.481091 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-06-19 10:03:26.481456 | orchestrator | ++ semver latest 9.0.0 2025-06-19 10:03:26.530288 | orchestrator | + [[ -1 -lt 0 ]] 2025-06-19 10:03:26.530373 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-06-19 10:03:26.531032 | 
orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-06-19 10:03:28.108922 | orchestrator | Registering Redlock._acquired_script 2025-06-19 10:03:28.109020 | orchestrator | Registering Redlock._extend_script 2025-06-19 10:03:28.109034 | orchestrator | Registering Redlock._release_script 2025-06-19 10:03:28.174576 | orchestrator | 2025-06-19 10:03:28 | INFO  | Task 9a86ce42-65f4-4710-9354-67fec687c1f2 (operator) was prepared for execution. 2025-06-19 10:03:28.174724 | orchestrator | 2025-06-19 10:03:28 | INFO  | It takes a moment until task 9a86ce42-65f4-4710-9354-67fec687c1f2 (operator) has been started and output is visible here. 2025-06-19 10:03:44.036004 | orchestrator | 2025-06-19 10:03:44.036123 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-06-19 10:03:44.036139 | orchestrator | 2025-06-19 10:03:44.036150 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-19 10:03:44.036162 | orchestrator | Thursday 19 June 2025 10:03:32 +0000 (0:00:00.144) 0:00:00.144 ********* 2025-06-19 10:03:44.036173 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:03:44.036185 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:03:44.036196 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:03:44.036207 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:03:44.036218 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:03:44.036228 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:03:44.036239 | orchestrator | 2025-06-19 10:03:44.036250 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-06-19 10:03:44.036261 | orchestrator | Thursday 19 June 2025 10:03:35 +0000 (0:00:03.213) 0:00:03.357 ********* 2025-06-19 10:03:44.036271 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:03:44.036282 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:03:44.036292 | orchestrator | ok: [testbed-node-0] 2025-06-19 
10:03:44.036303 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:03:44.036314 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:03:44.036340 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:03:44.036351 | orchestrator | 2025-06-19 10:03:44.036363 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-06-19 10:03:44.036374 | orchestrator | 2025-06-19 10:03:44.036384 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-06-19 10:03:44.036395 | orchestrator | Thursday 19 June 2025 10:03:36 +0000 (0:00:00.787) 0:00:04.145 ********* 2025-06-19 10:03:44.036406 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:03:44.036417 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:03:44.036449 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:03:44.036460 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:03:44.036471 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:03:44.036482 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:03:44.036492 | orchestrator | 2025-06-19 10:03:44.036503 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-06-19 10:03:44.036514 | orchestrator | Thursday 19 June 2025 10:03:36 +0000 (0:00:00.159) 0:00:04.305 ********* 2025-06-19 10:03:44.036525 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:03:44.036536 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:03:44.036548 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:03:44.036560 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:03:44.036573 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:03:44.036586 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:03:44.036599 | orchestrator | 2025-06-19 10:03:44.036612 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-06-19 10:03:44.036625 | orchestrator | Thursday 19 June 2025 10:03:36 +0000 (0:00:00.164) 0:00:04.469 ********* 
2025-06-19 10:03:44.036638 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:03:44.036652 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:03:44.036663 | orchestrator | changed: [testbed-node-4]
2025-06-19 10:03:44.036700 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:03:44.036712 | orchestrator | changed: [testbed-node-5]
2025-06-19 10:03:44.036722 | orchestrator | changed: [testbed-node-3]
2025-06-19 10:03:44.036733 | orchestrator |
2025-06-19 10:03:44.036744 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-06-19 10:03:44.036755 | orchestrator | Thursday 19 June 2025 10:03:37 +0000 (0:00:00.605) 0:00:05.074 *********
2025-06-19 10:03:44.036766 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:03:44.036776 | orchestrator | changed: [testbed-node-3]
2025-06-19 10:03:44.036787 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:03:44.036797 | orchestrator | changed: [testbed-node-4]
2025-06-19 10:03:44.036808 | orchestrator | changed: [testbed-node-5]
2025-06-19 10:03:44.036819 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:03:44.036829 | orchestrator |
2025-06-19 10:03:44.036840 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-06-19 10:03:44.036850 | orchestrator | Thursday 19 June 2025 10:03:37 +0000 (0:00:00.809) 0:00:05.883 *********
2025-06-19 10:03:44.036861 | orchestrator | changed: [testbed-node-0] => (item=adm)
2025-06-19 10:03:44.036872 | orchestrator | changed: [testbed-node-2] => (item=adm)
2025-06-19 10:03:44.036883 | orchestrator | changed: [testbed-node-1] => (item=adm)
2025-06-19 10:03:44.036893 | orchestrator | changed: [testbed-node-3] => (item=adm)
2025-06-19 10:03:44.036904 | orchestrator | changed: [testbed-node-5] => (item=adm)
2025-06-19 10:03:44.036914 | orchestrator | changed: [testbed-node-4] => (item=adm)
2025-06-19 10:03:44.036925 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2025-06-19 10:03:44.036936 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2025-06-19 10:03:44.036946 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2025-06-19 10:03:44.036957 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2025-06-19 10:03:44.036967 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2025-06-19 10:03:44.036978 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2025-06-19 10:03:44.036989 | orchestrator |
2025-06-19 10:03:44.036999 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-06-19 10:03:44.037010 | orchestrator | Thursday 19 June 2025 10:03:38 +0000 (0:00:01.179) 0:00:07.063 *********
2025-06-19 10:03:44.037021 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:03:44.037032 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:03:44.037043 | orchestrator | changed: [testbed-node-3]
2025-06-19 10:03:44.037053 | orchestrator | changed: [testbed-node-5]
2025-06-19 10:03:44.037064 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:03:44.037074 | orchestrator | changed: [testbed-node-4]
2025-06-19 10:03:44.037085 | orchestrator |
2025-06-19 10:03:44.037096 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-06-19 10:03:44.037115 | orchestrator | Thursday 19 June 2025 10:03:40 +0000 (0:00:01.255) 0:00:08.318 *********
2025-06-19 10:03:44.037127 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2025-06-19 10:03:44.037137 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2025-06-19 10:03:44.037148 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2025-06-19 10:03:44.037159 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2025-06-19 10:03:44.037186 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2025-06-19 10:03:44.037198 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2025-06-19 10:03:44.037208 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2025-06-19 10:03:44.037219 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2025-06-19 10:03:44.037230 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2025-06-19 10:03:44.037240 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2025-06-19 10:03:44.037251 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2025-06-19 10:03:44.037262 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2025-06-19 10:03:44.037272 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2025-06-19 10:03:44.037283 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2025-06-19 10:03:44.037294 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2025-06-19 10:03:44.037304 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2025-06-19 10:03:44.037315 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2025-06-19 10:03:44.037326 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2025-06-19 10:03:44.037337 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2025-06-19 10:03:44.037347 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2025-06-19 10:03:44.037358 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2025-06-19 10:03:44.037369 | orchestrator |
2025-06-19 10:03:44.037379 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-06-19 10:03:44.037390 | orchestrator | Thursday 19 June 2025 10:03:41 +0000 (0:00:01.243) 0:00:09.561 *********
2025-06-19 10:03:44.037401 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:03:44.037411 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:03:44.037422 | orchestrator | changed: [testbed-node-3]
2025-06-19 10:03:44.037433 | orchestrator | changed: [testbed-node-5]
2025-06-19 10:03:44.037444 | orchestrator | changed: [testbed-node-4]
2025-06-19 10:03:44.037455 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:03:44.037465 | orchestrator |
2025-06-19 10:03:44.037476 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-06-19 10:03:44.037495 | orchestrator | Thursday 19 June 2025 10:03:42 +0000 (0:00:00.583) 0:00:10.144 *********
2025-06-19 10:03:44.037506 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:03:44.037517 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:03:44.037528 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:03:44.037538 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:03:44.037549 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:03:44.037560 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:03:44.037571 | orchestrator |
2025-06-19 10:03:44.037581 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-06-19 10:03:44.037592 | orchestrator | Thursday 19 June 2025 10:03:42 +0000 (0:00:00.195) 0:00:10.340 *********
2025-06-19 10:03:44.037603 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-06-19 10:03:44.037629 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-06-19 10:03:44.037656 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:03:44.037706 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-06-19 10:03:44.037718 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-06-19 10:03:44.037728 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-06-19 10:03:44.037739 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-06-19 10:03:44.037750 | orchestrator | changed: [testbed-node-3]
2025-06-19 10:03:44.037761 | orchestrator | changed: [testbed-node-4]
2025-06-19 10:03:44.037772 | orchestrator | changed: [testbed-node-5]
2025-06-19 10:03:44.037782 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:03:44.037793 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:03:44.037804 | orchestrator |
2025-06-19 10:03:44.037814 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-06-19 10:03:44.037826 | orchestrator | Thursday 19 June 2025 10:03:42 +0000 (0:00:00.656) 0:00:10.996 *********
2025-06-19 10:03:44.037836 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:03:44.037847 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:03:44.037858 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:03:44.037868 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:03:44.037879 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:03:44.037890 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:03:44.037900 | orchestrator |
2025-06-19 10:03:44.037911 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-06-19 10:03:44.037922 | orchestrator | Thursday 19 June 2025 10:03:43 +0000 (0:00:00.148) 0:00:11.145 *********
2025-06-19 10:03:44.037932 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:03:44.037943 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:03:44.037954 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:03:44.037965 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:03:44.037976 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:03:44.037986 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:03:44.037997 | orchestrator |
2025-06-19 10:03:44.038008 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-06-19 10:03:44.038073 | orchestrator | Thursday 19 June 2025 10:03:43 +0000 (0:00:00.155) 0:00:11.301 *********
2025-06-19 10:03:44.038085 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:03:44.038095 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:03:44.038106 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:03:44.038117 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:03:44.038128 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:03:44.038138 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:03:44.038149 | orchestrator |
2025-06-19 10:03:44.038160 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-06-19 10:03:44.038170 | orchestrator | Thursday 19 June 2025 10:03:43 +0000 (0:00:00.141) 0:00:11.442 *********
2025-06-19 10:03:44.038181 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:03:44.038192 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:03:44.038203 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:03:44.038213 | orchestrator | changed: [testbed-node-3]
2025-06-19 10:03:44.038224 | orchestrator | changed: [testbed-node-4]
2025-06-19 10:03:44.038243 | orchestrator | changed: [testbed-node-5]
2025-06-19 10:03:44.465066 | orchestrator |
2025-06-19 10:03:44.465167 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-06-19 10:03:44.465189 | orchestrator | Thursday 19 June 2025 10:03:44 +0000 (0:00:00.643) 0:00:12.086 *********
2025-06-19 10:03:44.465210 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:03:44.465231 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:03:44.465252 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:03:44.465271 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:03:44.465292 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:03:44.465310 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:03:44.465330 | orchestrator |
2025-06-19 10:03:44.465350 | orchestrator | PLAY RECAP *********************************************************************
2025-06-19 10:03:44.465371 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-19 10:03:44.465433 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-19 10:03:44.465446 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-19 10:03:44.465458 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-19 10:03:44.465468 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-19 10:03:44.465479 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-19 10:03:44.465490 | orchestrator |
2025-06-19 10:03:44.465501 | orchestrator |
2025-06-19 10:03:44.465512 | orchestrator | TASKS RECAP ********************************************************************
2025-06-19 10:03:44.465523 | orchestrator | Thursday 19 June 2025 10:03:44 +0000 (0:00:00.205) 0:00:12.291 *********
2025-06-19 10:03:44.465534 | orchestrator | ===============================================================================
2025-06-19 10:03:44.465544 | orchestrator | Gathering Facts --------------------------------------------------------- 3.21s
2025-06-19 10:03:44.465555 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.26s
2025-06-19 10:03:44.465566 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.24s
2025-06-19 10:03:44.465577 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.18s
2025-06-19 10:03:44.465589 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.81s
2025-06-19 10:03:44.465599 | orchestrator | Do not require tty for all users ---------------------------------------- 0.79s
2025-06-19 10:03:44.465613 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.66s
2025-06-19 10:03:44.465625 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.64s
2025-06-19 10:03:44.465637 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.61s
2025-06-19 10:03:44.465649 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.58s
2025-06-19 10:03:44.465661 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.21s
2025-06-19 10:03:44.465705 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.20s
2025-06-19 10:03:44.465719 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.16s
2025-06-19 10:03:44.465731 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.16s
2025-06-19 10:03:44.465750 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.16s
2025-06-19 10:03:44.465770 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.15s
2025-06-19 10:03:44.465790 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.14s
2025-06-19 10:03:44.686284 | orchestrator | + osism apply --environment custom facts
2025-06-19 10:03:46.345184 | orchestrator | 2025-06-19 10:03:46 | INFO  | Trying to run play facts in environment custom
2025-06-19 10:03:46.349661 | orchestrator | Registering Redlock._acquired_script
2025-06-19 10:03:46.349766 | orchestrator | Registering Redlock._extend_script
2025-06-19 10:03:46.349780 | orchestrator | Registering Redlock._release_script
2025-06-19 10:03:46.409420 | orchestrator | 2025-06-19 10:03:46 | INFO  | Task 214e5c51-ede0-4675-a4b9-65a4e5df8d72 (facts) was prepared for execution.
2025-06-19 10:03:46.409504 | orchestrator | 2025-06-19 10:03:46 | INFO  | It takes a moment until task 214e5c51-ede0-4675-a4b9-65a4e5df8d72 (facts) has been started and output is visible here.
2025-06-19 10:04:25.397641 | orchestrator |
2025-06-19 10:04:25.397826 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-06-19 10:04:25.397848 | orchestrator |
2025-06-19 10:04:25.397861 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-06-19 10:04:25.397873 | orchestrator | Thursday 19 June 2025 10:03:50 +0000 (0:00:00.085) 0:00:00.085 *********
2025-06-19 10:04:25.397885 | orchestrator | ok: [testbed-manager]
2025-06-19 10:04:25.397897 | orchestrator | changed: [testbed-node-3]
2025-06-19 10:04:25.397908 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:04:25.397919 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:04:25.397929 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:04:25.397940 | orchestrator | changed: [testbed-node-4]
2025-06-19 10:04:25.397951 | orchestrator | changed: [testbed-node-5]
2025-06-19 10:04:25.397961 | orchestrator |
2025-06-19 10:04:25.397972 | orchestrator | TASK [Copy fact file] **********************************************************
2025-06-19 10:04:25.397983 | orchestrator | Thursday 19 June 2025 10:03:51 +0000 (0:00:01.450) 0:00:01.535 *********
2025-06-19 10:04:25.397994 | orchestrator | ok: [testbed-manager]
2025-06-19 10:04:25.398005 | orchestrator | changed: [testbed-node-4]
2025-06-19 10:04:25.398077 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:04:25.398090 | orchestrator | changed: [testbed-node-3]
2025-06-19 10:04:25.398101 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:04:25.398112 | orchestrator | changed: [testbed-node-5]
2025-06-19 10:04:25.398122 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:04:25.398133 | orchestrator |
2025-06-19 10:04:25.398144 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-06-19 10:04:25.398155 | orchestrator |
2025-06-19 10:04:25.398166 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-06-19 10:04:25.398177 | orchestrator | Thursday 19 June 2025 10:03:52 +0000 (0:00:01.154) 0:00:02.689 *********
2025-06-19 10:04:25.398190 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:04:25.398203 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:04:25.398215 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:04:25.398228 | orchestrator |
2025-06-19 10:04:25.398240 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-06-19 10:04:25.398253 | orchestrator | Thursday 19 June 2025 10:03:52 +0000 (0:00:00.109) 0:00:02.799 *********
2025-06-19 10:04:25.398265 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:04:25.398278 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:04:25.398290 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:04:25.398302 | orchestrator |
2025-06-19 10:04:25.398315 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-06-19 10:04:25.398327 | orchestrator | Thursday 19 June 2025 10:03:53 +0000 (0:00:00.212) 0:00:03.011 *********
2025-06-19 10:04:25.398340 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:04:25.398352 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:04:25.398366 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:04:25.398378 | orchestrator |
2025-06-19 10:04:25.398391 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-06-19 10:04:25.398423 | orchestrator | Thursday 19 June 2025 10:03:53 +0000 (0:00:00.208) 0:00:03.220 *********
2025-06-19 10:04:25.398437 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-19 10:04:25.398452 | orchestrator |
2025-06-19 10:04:25.398465 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-06-19 10:04:25.398477 | orchestrator | Thursday 19 June 2025 10:03:53 +0000 (0:00:00.135) 0:00:03.355 *********
2025-06-19 10:04:25.398490 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:04:25.398502 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:04:25.398514 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:04:25.398526 | orchestrator |
2025-06-19 10:04:25.398538 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-06-19 10:04:25.398549 | orchestrator | Thursday 19 June 2025 10:03:53 +0000 (0:00:00.450) 0:00:03.805 *********
2025-06-19 10:04:25.398584 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:04:25.398595 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:04:25.398606 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:04:25.398617 | orchestrator |
2025-06-19 10:04:25.398628 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-06-19 10:04:25.398639 | orchestrator | Thursday 19 June 2025 10:03:54 +0000 (0:00:00.125) 0:00:03.931 *********
2025-06-19 10:04:25.398649 | orchestrator | changed: [testbed-node-4]
2025-06-19 10:04:25.398660 | orchestrator | changed: [testbed-node-5]
2025-06-19 10:04:25.398671 | orchestrator | changed: [testbed-node-3]
2025-06-19 10:04:25.398681 | orchestrator |
2025-06-19 10:04:25.398692 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-06-19 10:04:25.398703 | orchestrator | Thursday 19 June 2025 10:03:55 +0000 (0:00:01.083) 0:00:05.014 *********
2025-06-19 10:04:25.398736 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:04:25.398749 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:04:25.398760 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:04:25.398771 | orchestrator |
2025-06-19 10:04:25.398831 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-06-19 10:04:25.398843 | orchestrator | Thursday 19 June 2025 10:03:55 +0000 (0:00:00.432) 0:00:05.447 *********
2025-06-19 10:04:25.398854 | orchestrator | changed: [testbed-node-3]
2025-06-19 10:04:25.398865 | orchestrator | changed: [testbed-node-4]
2025-06-19 10:04:25.398876 | orchestrator | changed: [testbed-node-5]
2025-06-19 10:04:25.398886 | orchestrator |
2025-06-19 10:04:25.398897 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-06-19 10:04:25.398908 | orchestrator | Thursday 19 June 2025 10:03:56 +0000 (0:00:01.043) 0:00:06.490 *********
2025-06-19 10:04:25.398919 | orchestrator | changed: [testbed-node-4]
2025-06-19 10:04:25.398929 | orchestrator | changed: [testbed-node-3]
2025-06-19 10:04:25.398940 | orchestrator | changed: [testbed-node-5]
2025-06-19 10:04:25.398951 | orchestrator |
2025-06-19 10:04:25.398961 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-06-19 10:04:25.398972 | orchestrator | Thursday 19 June 2025 10:04:09 +0000 (0:00:12.993) 0:00:19.483 *********
2025-06-19 10:04:25.398983 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:04:25.398994 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:04:25.399004 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:04:25.399015 | orchestrator |
2025-06-19 10:04:25.399026 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-06-19 10:04:25.399056 | orchestrator | Thursday 19 June 2025 10:04:09 +0000 (0:00:00.086) 0:00:19.570 *********
2025-06-19 10:04:25.399067 | orchestrator | changed: [testbed-node-3]
2025-06-19 10:04:25.399078 | orchestrator | changed: [testbed-node-4]
2025-06-19 10:04:25.399089 | orchestrator | changed: [testbed-node-5]
2025-06-19 10:04:25.399100 | orchestrator |
2025-06-19 10:04:25.399110 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-06-19 10:04:25.399121 | orchestrator | Thursday 19 June 2025 10:04:16 +0000 (0:00:06.951) 0:00:26.522 *********
2025-06-19 10:04:25.399132 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:04:25.399143 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:04:25.399153 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:04:25.399164 | orchestrator |
2025-06-19 10:04:25.399175 | orchestrator | TASK [Copy fact files] *********************************************************
2025-06-19 10:04:25.399186 | orchestrator | Thursday 19 June 2025 10:04:17 +0000 (0:00:00.489) 0:00:27.011 *********
2025-06-19 10:04:25.399196 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2025-06-19 10:04:25.399207 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2025-06-19 10:04:25.399218 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2025-06-19 10:04:25.399229 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2025-06-19 10:04:25.399239 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2025-06-19 10:04:25.399260 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2025-06-19 10:04:25.399271 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2025-06-19 10:04:25.399282 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2025-06-19 10:04:25.399299 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2025-06-19 10:04:25.399310 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2025-06-19 10:04:25.399321 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2025-06-19 10:04:25.399332 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2025-06-19 10:04:25.399342 | orchestrator |
2025-06-19 10:04:25.399353 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-06-19 10:04:25.399364 | orchestrator | Thursday 19 June 2025 10:04:20 +0000 (0:00:03.249) 0:00:30.261 *********
2025-06-19 10:04:25.399374 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:04:25.399385 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:04:25.399396 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:04:25.399407 | orchestrator |
2025-06-19 10:04:25.399417 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-19 10:04:25.399428 | orchestrator |
2025-06-19 10:04:25.399439 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-19 10:04:25.399450 | orchestrator | Thursday 19 June 2025 10:04:21 +0000 (0:00:01.050) 0:00:31.312 *********
2025-06-19 10:04:25.399461 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:04:25.399471 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:04:25.399482 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:04:25.399493 | orchestrator | ok: [testbed-manager]
2025-06-19 10:04:25.399503 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:04:25.399514 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:04:25.399525 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:04:25.399535 | orchestrator |
2025-06-19 10:04:25.399546 | orchestrator | PLAY RECAP *********************************************************************
2025-06-19 10:04:25.399558 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-19 10:04:25.399570 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-19 10:04:25.399582 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-19 10:04:25.399593 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-19 10:04:25.399604 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-19 10:04:25.399616 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-19 10:04:25.399627 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-19 10:04:25.399638 | orchestrator |
2025-06-19 10:04:25.399649 | orchestrator |
2025-06-19 10:04:25.399659 | orchestrator | TASKS RECAP ********************************************************************
2025-06-19 10:04:25.399671 | orchestrator | Thursday 19 June 2025 10:04:25 +0000 (0:00:03.903) 0:00:35.215 *********
2025-06-19 10:04:25.399681 | orchestrator | ===============================================================================
2025-06-19 10:04:25.399692 | orchestrator | osism.commons.repository : Update package cache ------------------------ 12.99s
2025-06-19 10:04:25.399703 | orchestrator | Install required packages (Debian) -------------------------------------- 6.95s
2025-06-19 10:04:25.399744 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.90s
2025-06-19 10:04:25.399765 | orchestrator | Copy fact files --------------------------------------------------------- 3.25s
2025-06-19 10:04:25.399776 | orchestrator | Create custom facts directory ------------------------------------------- 1.45s
2025-06-19 10:04:25.399787 | orchestrator | Copy fact file ---------------------------------------------------------- 1.15s
2025-06-19 10:04:25.399805 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.08s
2025-06-19 10:04:25.616483 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.05s
2025-06-19 10:04:25.616575 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.04s
2025-06-19 10:04:25.616590 | orchestrator | Create custom facts directory ------------------------------------------- 0.49s
2025-06-19 10:04:25.616601 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.45s
2025-06-19 10:04:25.616612 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.43s
2025-06-19 10:04:25.616624 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.21s
2025-06-19 10:04:25.616635 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.21s
2025-06-19 10:04:25.616646 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.14s
2025-06-19 10:04:25.616657 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.13s
2025-06-19 10:04:25.616668 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.11s
2025-06-19 10:04:25.616678 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.09s
2025-06-19 10:04:25.883028 | orchestrator | + osism apply bootstrap
2025-06-19 10:04:27.491139 | orchestrator | Registering Redlock._acquired_script
2025-06-19 10:04:27.491264 | orchestrator | Registering Redlock._extend_script
2025-06-19 10:04:27.491282 | orchestrator | Registering Redlock._release_script
2025-06-19 10:04:27.547571 | orchestrator | 2025-06-19 10:04:27 | INFO  | Task 1bc68fa6-dbb1-405a-b494-013c2e9724c4 (bootstrap) was prepared for execution.
2025-06-19 10:04:27.547652 | orchestrator | 2025-06-19 10:04:27 | INFO  | It takes a moment until task 1bc68fa6-dbb1-405a-b494-013c2e9724c4 (bootstrap) has been started and output is visible here.
2025-06-19 10:04:43.155926 | orchestrator |
2025-06-19 10:04:43.156044 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-06-19 10:04:43.156062 | orchestrator |
2025-06-19 10:04:43.156075 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-06-19 10:04:43.156087 | orchestrator | Thursday 19 June 2025 10:04:31 +0000 (0:00:00.146) 0:00:00.146 *********
2025-06-19 10:04:43.156098 | orchestrator | ok: [testbed-manager]
2025-06-19 10:04:43.156111 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:04:43.156121 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:04:43.156132 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:04:43.156143 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:04:43.156154 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:04:43.156164 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:04:43.156175 | orchestrator |
2025-06-19 10:04:43.156186 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-19 10:04:43.156197 | orchestrator |
2025-06-19 10:04:43.156208 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-19 10:04:43.156218 | orchestrator | Thursday 19 June 2025 10:04:31 +0000 (0:00:00.191) 0:00:00.338 *********
2025-06-19 10:04:43.156229 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:04:43.156240 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:04:43.156250 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:04:43.156261 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:04:43.156272 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:04:43.156282 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:04:43.156293 | orchestrator | ok: [testbed-manager]
2025-06-19 10:04:43.156304 | orchestrator |
2025-06-19 10:04:43.156314 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2025-06-19 10:04:43.156325 | orchestrator |
2025-06-19 10:04:43.156359 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-19 10:04:43.156371 | orchestrator | Thursday 19 June 2025 10:04:35 +0000 (0:00:03.665) 0:00:04.004 *********
2025-06-19 10:04:43.156382 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-06-19 10:04:43.156393 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-06-19 10:04:43.156420 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-06-19 10:04:43.156432 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-06-19 10:04:43.156443 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-19 10:04:43.156456 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-06-19 10:04:43.156469 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-19 10:04:43.156483 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-06-19 10:04:43.156496 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-19 10:04:43.156508 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-06-19 10:04:43.156520 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-06-19 10:04:43.156533 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-06-19 10:04:43.156545 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-06-19 10:04:43.156559 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2025-06-19 10:04:43.156571 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-06-19 10:04:43.156583 | orchestrator | skipping: [testbed-manager]
2025-06-19 10:04:43.156596 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2025-06-19 10:04:43.156608 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-06-19 10:04:43.156620 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-06-19 10:04:43.156633 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-06-19 10:04:43.156651 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-06-19 10:04:43.156669 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2025-06-19 10:04:43.156688 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-06-19 10:04:43.156707 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-06-19 10:04:43.156724 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-06-19 10:04:43.156767 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:04:43.156780 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-06-19 10:04:43.156793 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-06-19 10:04:43.156806 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-06-19 10:04:43.156817 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-06-19 10:04:43.156827 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-06-19 10:04:43.156838 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-06-19 10:04:43.156849 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-06-19 10:04:43.156859 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2025-06-19 10:04:43.156870 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-06-19 10:04:43.156880 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-06-19 10:04:43.156891 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-06-19 10:04:43.156902 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-06-19 10:04:43.156912 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-06-19 10:04:43.156924 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-06-19 10:04:43.156935 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-06-19 10:04:43.156945 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:04:43.156956 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-06-19 10:04:43.156983 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-06-19 10:04:43.156995 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-06-19 10:04:43.157005 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:04:43.157035 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-06-19 10:04:43.157047 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-06-19 10:04:43.157058 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-06-19 10:04:43.157068 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:04:43.157079 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-06-19 10:04:43.157089 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:04:43.157100 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-06-19 10:04:43.157111 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-06-19 10:04:43.157121 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-06-19 10:04:43.157132 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:04:43.157143 | orchestrator |
2025-06-19 10:04:43.157154 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2025-06-19 10:04:43.157165 | orchestrator |
2025-06-19 10:04:43.157176 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2025-06-19 10:04:43.157186 | orchestrator | Thursday 19 June 2025 10:04:35 +0000 (0:00:00.422) 0:00:04.427 *********
2025-06-19 10:04:43.157197 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:04:43.157207 | orchestrator | ok: [testbed-manager]
2025-06-19 10:04:43.157218 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:04:43.157229 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:04:43.157239 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:04:43.157250 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:04:43.157260 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:04:43.157286 | orchestrator |
2025-06-19 10:04:43.157297 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2025-06-19 10:04:43.157308 | orchestrator | Thursday 19 June 2025 10:04:36 +0000 (0:00:01.191) 0:00:05.618 *********
2025-06-19 10:04:43.157319 | orchestrator | ok: [testbed-manager]
2025-06-19 10:04:43.157330 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:04:43.157340 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:04:43.157351 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:04:43.157361 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:04:43.157372 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:04:43.157383 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:04:43.157394 | orchestrator |
2025-06-19 10:04:43.157405 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2025-06-19 10:04:43.157415 | orchestrator | Thursday 19 June 2025 10:04:38 +0000 (0:00:01.299) 0:00:06.917 *********
2025-06-19 10:04:43.157427 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1,
testbed-node-2 2025-06-19 10:04:43.157441 | orchestrator | 2025-06-19 10:04:43.157452 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-06-19 10:04:43.157463 | orchestrator | Thursday 19 June 2025 10:04:38 +0000 (0:00:00.309) 0:00:07.227 ********* 2025-06-19 10:04:43.157474 | orchestrator | changed: [testbed-manager] 2025-06-19 10:04:43.157485 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:04:43.157495 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:04:43.157506 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:04:43.157517 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:04:43.157528 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:04:43.157539 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:04:43.157550 | orchestrator | 2025-06-19 10:04:43.157561 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-06-19 10:04:43.157571 | orchestrator | Thursday 19 June 2025 10:04:40 +0000 (0:00:02.123) 0:00:09.350 ********* 2025-06-19 10:04:43.157589 | orchestrator | skipping: [testbed-manager] 2025-06-19 10:04:43.157601 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:04:43.157613 | orchestrator | 2025-06-19 10:04:43.157624 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-06-19 10:04:43.157635 | orchestrator | Thursday 19 June 2025 10:04:40 +0000 (0:00:00.325) 0:00:09.676 ********* 2025-06-19 10:04:43.157646 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:04:43.157657 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:04:43.157667 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:04:43.157678 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:04:43.157689 | 
orchestrator | changed: [testbed-node-0] 2025-06-19 10:04:43.157699 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:04:43.157710 | orchestrator | 2025-06-19 10:04:43.157721 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-06-19 10:04:43.157751 | orchestrator | Thursday 19 June 2025 10:04:41 +0000 (0:00:01.066) 0:00:10.743 ********* 2025-06-19 10:04:43.157764 | orchestrator | skipping: [testbed-manager] 2025-06-19 10:04:43.157775 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:04:43.157786 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:04:43.157796 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:04:43.157807 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:04:43.157817 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:04:43.157828 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:04:43.157839 | orchestrator | 2025-06-19 10:04:43.157850 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-06-19 10:04:43.157861 | orchestrator | Thursday 19 June 2025 10:04:42 +0000 (0:00:00.568) 0:00:11.312 ********* 2025-06-19 10:04:43.157871 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:04:43.157882 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:04:43.157893 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:04:43.157903 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:04:43.157914 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:04:43.157925 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:04:43.157936 | orchestrator | ok: [testbed-manager] 2025-06-19 10:04:43.157946 | orchestrator | 2025-06-19 10:04:43.157957 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-06-19 10:04:43.157969 | orchestrator | Thursday 19 June 2025 10:04:42 +0000 (0:00:00.433) 0:00:11.745 ********* 2025-06-19 10:04:43.157980 | 
orchestrator | skipping: [testbed-manager] 2025-06-19 10:04:43.157991 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:04:43.158010 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:04:56.402917 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:04:56.403032 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:04:56.403047 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:04:56.403058 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:04:56.403070 | orchestrator | 2025-06-19 10:04:56.403083 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-06-19 10:04:56.403096 | orchestrator | Thursday 19 June 2025 10:04:43 +0000 (0:00:00.269) 0:00:12.014 ********* 2025-06-19 10:04:56.403109 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:04:56.403137 | orchestrator | 2025-06-19 10:04:56.403148 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-06-19 10:04:56.403160 | orchestrator | Thursday 19 June 2025 10:04:43 +0000 (0:00:00.323) 0:00:12.338 ********* 2025-06-19 10:04:56.403171 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:04:56.403208 | orchestrator | 2025-06-19 10:04:56.403220 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-06-19 10:04:56.403231 | orchestrator | Thursday 19 June 2025 10:04:43 +0000 (0:00:00.344) 0:00:12.683 ********* 2025-06-19 10:04:56.403242 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:04:56.403253 | 
orchestrator | ok: [testbed-node-1] 2025-06-19 10:04:56.403263 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:04:56.403274 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:04:56.403285 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:04:56.403295 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:04:56.403306 | orchestrator | ok: [testbed-manager] 2025-06-19 10:04:56.403316 | orchestrator | 2025-06-19 10:04:56.403327 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-06-19 10:04:56.403338 | orchestrator | Thursday 19 June 2025 10:04:45 +0000 (0:00:01.314) 0:00:13.997 ********* 2025-06-19 10:04:56.403349 | orchestrator | skipping: [testbed-manager] 2025-06-19 10:04:56.403359 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:04:56.403370 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:04:56.403380 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:04:56.403391 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:04:56.403401 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:04:56.403412 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:04:56.403422 | orchestrator | 2025-06-19 10:04:56.403433 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-06-19 10:04:56.403444 | orchestrator | Thursday 19 June 2025 10:04:45 +0000 (0:00:00.243) 0:00:14.241 ********* 2025-06-19 10:04:56.403457 | orchestrator | ok: [testbed-manager] 2025-06-19 10:04:56.403470 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:04:56.403481 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:04:56.403493 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:04:56.403505 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:04:56.403517 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:04:56.403528 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:04:56.403540 | orchestrator | 2025-06-19 10:04:56.403553 | orchestrator | TASK 
[osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-06-19 10:04:56.403565 | orchestrator | Thursday 19 June 2025 10:04:46 +0000 (0:00:00.592) 0:00:14.834 ********* 2025-06-19 10:04:56.403577 | orchestrator | skipping: [testbed-manager] 2025-06-19 10:04:56.403589 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:04:56.403602 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:04:56.403614 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:04:56.403626 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:04:56.403638 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:04:56.403649 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:04:56.403661 | orchestrator | 2025-06-19 10:04:56.403674 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-06-19 10:04:56.403687 | orchestrator | Thursday 19 June 2025 10:04:46 +0000 (0:00:00.280) 0:00:15.114 ********* 2025-06-19 10:04:56.403699 | orchestrator | ok: [testbed-manager] 2025-06-19 10:04:56.403712 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:04:56.403724 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:04:56.403736 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:04:56.403768 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:04:56.403780 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:04:56.403793 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:04:56.403805 | orchestrator | 2025-06-19 10:04:56.403816 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-06-19 10:04:56.403827 | orchestrator | Thursday 19 June 2025 10:04:46 +0000 (0:00:00.548) 0:00:15.662 ********* 2025-06-19 10:04:56.403837 | orchestrator | ok: [testbed-manager] 2025-06-19 10:04:56.403848 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:04:56.403900 | orchestrator | changed: [testbed-node-5] 2025-06-19 
10:04:56.403922 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:04:56.403933 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:04:56.403943 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:04:56.403954 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:04:56.404006 | orchestrator | 2025-06-19 10:04:56.404019 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-06-19 10:04:56.404029 | orchestrator | Thursday 19 June 2025 10:04:47 +0000 (0:00:01.065) 0:00:16.728 ********* 2025-06-19 10:04:56.404041 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:04:56.404052 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:04:56.404063 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:04:56.404074 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:04:56.404084 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:04:56.404095 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:04:56.404111 | orchestrator | ok: [testbed-manager] 2025-06-19 10:04:56.404122 | orchestrator | 2025-06-19 10:04:56.404133 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-06-19 10:04:56.404144 | orchestrator | Thursday 19 June 2025 10:04:50 +0000 (0:00:02.101) 0:00:18.829 ********* 2025-06-19 10:04:56.404175 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:04:56.404187 | orchestrator | 2025-06-19 10:04:56.404198 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-06-19 10:04:56.404209 | orchestrator | Thursday 19 June 2025 10:04:50 +0000 (0:00:00.423) 0:00:19.253 ********* 2025-06-19 10:04:56.404219 | orchestrator | skipping: [testbed-manager] 2025-06-19 10:04:56.404230 | orchestrator | 
changed: [testbed-node-1] 2025-06-19 10:04:56.404241 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:04:56.404252 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:04:56.404262 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:04:56.404273 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:04:56.404284 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:04:56.404294 | orchestrator | 2025-06-19 10:04:56.404305 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-06-19 10:04:56.404316 | orchestrator | Thursday 19 June 2025 10:04:51 +0000 (0:00:01.353) 0:00:20.607 ********* 2025-06-19 10:04:56.404327 | orchestrator | ok: [testbed-manager] 2025-06-19 10:04:56.404338 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:04:56.404348 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:04:56.404359 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:04:56.404370 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:04:56.404381 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:04:56.404391 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:04:56.404402 | orchestrator | 2025-06-19 10:04:56.404413 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-06-19 10:04:56.404424 | orchestrator | Thursday 19 June 2025 10:04:52 +0000 (0:00:00.242) 0:00:20.850 ********* 2025-06-19 10:04:56.404435 | orchestrator | ok: [testbed-manager] 2025-06-19 10:04:56.404446 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:04:56.404456 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:04:56.404467 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:04:56.404477 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:04:56.404488 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:04:56.404498 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:04:56.404509 | orchestrator | 2025-06-19 10:04:56.404520 | orchestrator | TASK [osism.commons.repository : Set repositories 
to default] ****************** 2025-06-19 10:04:56.404531 | orchestrator | Thursday 19 June 2025 10:04:52 +0000 (0:00:00.217) 0:00:21.067 ********* 2025-06-19 10:04:56.404542 | orchestrator | ok: [testbed-manager] 2025-06-19 10:04:56.404552 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:04:56.404563 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:04:56.404574 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:04:56.404584 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:04:56.404603 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:04:56.404614 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:04:56.404625 | orchestrator | 2025-06-19 10:04:56.404636 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-06-19 10:04:56.404647 | orchestrator | Thursday 19 June 2025 10:04:52 +0000 (0:00:00.269) 0:00:21.337 ********* 2025-06-19 10:04:56.404658 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:04:56.404675 | orchestrator | 2025-06-19 10:04:56.404694 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-06-19 10:04:56.404712 | orchestrator | Thursday 19 June 2025 10:04:52 +0000 (0:00:00.307) 0:00:21.644 ********* 2025-06-19 10:04:56.404728 | orchestrator | ok: [testbed-manager] 2025-06-19 10:04:56.404771 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:04:56.404792 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:04:56.404810 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:04:56.404827 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:04:56.404838 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:04:56.404849 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:04:56.404859 | orchestrator | 2025-06-19 10:04:56.404871 | orchestrator | TASK 
[osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-06-19 10:04:56.404882 | orchestrator | Thursday 19 June 2025 10:04:53 +0000 (0:00:00.522) 0:00:22.167 ********* 2025-06-19 10:04:56.404892 | orchestrator | skipping: [testbed-manager] 2025-06-19 10:04:56.404903 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:04:56.404915 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:04:56.404925 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:04:56.404936 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:04:56.404947 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:04:56.404957 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:04:56.404968 | orchestrator | 2025-06-19 10:04:56.404979 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-06-19 10:04:56.404990 | orchestrator | Thursday 19 June 2025 10:04:53 +0000 (0:00:00.255) 0:00:22.423 ********* 2025-06-19 10:04:56.405001 | orchestrator | ok: [testbed-manager] 2025-06-19 10:04:56.405012 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:04:56.405022 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:04:56.405033 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:04:56.405044 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:04:56.405055 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:04:56.405065 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:04:56.405076 | orchestrator | 2025-06-19 10:04:56.405087 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-06-19 10:04:56.405098 | orchestrator | Thursday 19 June 2025 10:04:54 +0000 (0:00:01.047) 0:00:23.470 ********* 2025-06-19 10:04:56.405109 | orchestrator | ok: [testbed-manager] 2025-06-19 10:04:56.405120 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:04:56.405130 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:04:56.405141 | orchestrator | ok: [testbed-node-4] 
2025-06-19 10:04:56.405152 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:04:56.405162 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:04:56.405173 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:04:56.405184 | orchestrator | 2025-06-19 10:04:56.405201 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-06-19 10:04:56.405212 | orchestrator | Thursday 19 June 2025 10:04:55 +0000 (0:00:00.578) 0:00:24.049 ********* 2025-06-19 10:04:56.405223 | orchestrator | ok: [testbed-manager] 2025-06-19 10:04:56.405234 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:04:56.405244 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:04:56.405255 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:04:56.405290 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:05:31.735593 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:05:31.735708 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:05:31.735749 | orchestrator | 2025-06-19 10:05:31.735763 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-06-19 10:05:31.735776 | orchestrator | Thursday 19 June 2025 10:04:56 +0000 (0:00:01.097) 0:00:25.146 ********* 2025-06-19 10:05:31.735848 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:05:31.735861 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:05:31.735872 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:05:31.735883 | orchestrator | changed: [testbed-manager] 2025-06-19 10:05:31.735894 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:05:31.735905 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:05:31.735916 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:05:31.735927 | orchestrator | 2025-06-19 10:05:31.735938 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-06-19 10:05:31.735949 | orchestrator | Thursday 19 June 2025 10:05:09 +0000 (0:00:12.882) 0:00:38.029 ********* 
2025-06-19 10:05:31.735960 | orchestrator | ok: [testbed-manager] 2025-06-19 10:05:31.735971 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:05:31.735982 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:05:31.735993 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:05:31.736003 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:05:31.736014 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:05:31.736025 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:05:31.736036 | orchestrator | 2025-06-19 10:05:31.736047 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-06-19 10:05:31.736058 | orchestrator | Thursday 19 June 2025 10:05:09 +0000 (0:00:00.217) 0:00:38.247 ********* 2025-06-19 10:05:31.736069 | orchestrator | ok: [testbed-manager] 2025-06-19 10:05:31.736080 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:05:31.736091 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:05:31.736102 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:05:31.736115 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:05:31.736127 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:05:31.736140 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:05:31.736152 | orchestrator | 2025-06-19 10:05:31.736164 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-06-19 10:05:31.736177 | orchestrator | Thursday 19 June 2025 10:05:09 +0000 (0:00:00.242) 0:00:38.490 ********* 2025-06-19 10:05:31.736189 | orchestrator | ok: [testbed-manager] 2025-06-19 10:05:31.736201 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:05:31.736214 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:05:31.736227 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:05:31.736239 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:05:31.736251 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:05:31.736262 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:05:31.736273 | orchestrator | 2025-06-19 
10:05:31.736284 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-06-19 10:05:31.736295 | orchestrator | Thursday 19 June 2025 10:05:09 +0000 (0:00:00.252) 0:00:38.742 ********* 2025-06-19 10:05:31.736307 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:05:31.736321 | orchestrator | 2025-06-19 10:05:31.736332 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-06-19 10:05:31.736343 | orchestrator | Thursday 19 June 2025 10:05:10 +0000 (0:00:00.311) 0:00:39.053 ********* 2025-06-19 10:05:31.736354 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:05:31.736365 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:05:31.736375 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:05:31.736386 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:05:31.736396 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:05:31.736407 | orchestrator | ok: [testbed-manager] 2025-06-19 10:05:31.736418 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:05:31.736428 | orchestrator | 2025-06-19 10:05:31.736439 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-06-19 10:05:31.736458 | orchestrator | Thursday 19 June 2025 10:05:11 +0000 (0:00:01.350) 0:00:40.404 ********* 2025-06-19 10:05:31.736469 | orchestrator | changed: [testbed-manager] 2025-06-19 10:05:31.736480 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:05:31.736491 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:05:31.736502 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:05:31.736513 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:05:31.736523 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:05:31.736535 | 
orchestrator | changed: [testbed-node-5] 2025-06-19 10:05:31.736546 | orchestrator | 2025-06-19 10:05:31.736557 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-06-19 10:05:31.736568 | orchestrator | Thursday 19 June 2025 10:05:12 +0000 (0:00:01.045) 0:00:41.450 ********* 2025-06-19 10:05:31.736579 | orchestrator | ok: [testbed-manager] 2025-06-19 10:05:31.736590 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:05:31.736600 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:05:31.736611 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:05:31.736622 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:05:31.736633 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:05:31.736644 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:05:31.736655 | orchestrator | 2025-06-19 10:05:31.736666 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-06-19 10:05:31.736677 | orchestrator | Thursday 19 June 2025 10:05:13 +0000 (0:00:00.827) 0:00:42.277 ********* 2025-06-19 10:05:31.736689 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:05:31.736702 | orchestrator | 2025-06-19 10:05:31.736713 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-06-19 10:05:31.736724 | orchestrator | Thursday 19 June 2025 10:05:13 +0000 (0:00:00.280) 0:00:42.558 ********* 2025-06-19 10:05:31.736735 | orchestrator | changed: [testbed-manager] 2025-06-19 10:05:31.736746 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:05:31.736757 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:05:31.736768 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:05:31.736779 | orchestrator | changed: [testbed-node-0] 2025-06-19 
10:05:31.736807 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:05:31.736818 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:05:31.736829 | orchestrator | 2025-06-19 10:05:31.736857 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-06-19 10:05:31.736868 | orchestrator | Thursday 19 June 2025 10:05:14 +0000 (0:00:01.014) 0:00:43.573 ********* 2025-06-19 10:05:31.736879 | orchestrator | skipping: [testbed-manager] 2025-06-19 10:05:31.736890 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:05:31.736901 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:05:31.736912 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:05:31.736922 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:05:31.736933 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:05:31.736944 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:05:31.736955 | orchestrator | 2025-06-19 10:05:31.736965 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-06-19 10:05:31.736976 | orchestrator | Thursday 19 June 2025 10:05:15 +0000 (0:00:00.302) 0:00:43.876 ********* 2025-06-19 10:05:31.736987 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:05:31.736998 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:05:31.737009 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:05:31.737019 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:05:31.737030 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:05:31.737041 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:05:31.737051 | orchestrator | changed: [testbed-manager] 2025-06-19 10:05:31.737062 | orchestrator | 2025-06-19 10:05:31.737073 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-06-19 10:05:31.737084 | orchestrator | Thursday 19 June 2025 10:05:26 +0000 (0:00:11.423) 0:00:55.299 ********* 2025-06-19 
10:05:31.737101 | orchestrator | ok: [testbed-manager] 2025-06-19 10:05:31.737112 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:05:31.737123 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:05:31.737134 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:05:31.737145 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:05:31.737155 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:05:31.737166 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:05:31.737177 | orchestrator | 2025-06-19 10:05:31.737188 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-06-19 10:05:31.737199 | orchestrator | Thursday 19 June 2025 10:05:27 +0000 (0:00:01.115) 0:00:56.414 ********* 2025-06-19 10:05:31.737210 | orchestrator | ok: [testbed-manager] 2025-06-19 10:05:31.737221 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:05:31.737231 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:05:31.737242 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:05:31.737253 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:05:31.737263 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:05:31.737274 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:05:31.737285 | orchestrator | 2025-06-19 10:05:31.737296 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-06-19 10:05:31.737307 | orchestrator | Thursday 19 June 2025 10:05:28 +0000 (0:00:00.884) 0:00:57.299 ********* 2025-06-19 10:05:31.737318 | orchestrator | ok: [testbed-manager] 2025-06-19 10:05:31.737329 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:05:31.737339 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:05:31.737350 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:05:31.737361 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:05:31.737372 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:05:31.737382 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:05:31.737393 | orchestrator | 2025-06-19 10:05:31.737404 
| orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-06-19 10:05:31.737415 | orchestrator | Thursday 19 June 2025 10:05:28 +0000 (0:00:00.230) 0:00:57.530 ********* 2025-06-19 10:05:31.737426 | orchestrator | ok: [testbed-manager] 2025-06-19 10:05:31.737437 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:05:31.737448 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:05:31.737458 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:05:31.737469 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:05:31.737480 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:05:31.737490 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:05:31.737501 | orchestrator | 2025-06-19 10:05:31.737529 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-06-19 10:05:31.737541 | orchestrator | Thursday 19 June 2025 10:05:29 +0000 (0:00:00.249) 0:00:57.779 ********* 2025-06-19 10:05:31.737552 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:05:31.737564 | orchestrator | 2025-06-19 10:05:31.737575 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-06-19 10:05:31.737586 | orchestrator | Thursday 19 June 2025 10:05:29 +0000 (0:00:00.312) 0:00:58.092 ********* 2025-06-19 10:05:31.737596 | orchestrator | ok: [testbed-manager] 2025-06-19 10:05:31.737607 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:05:31.737618 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:05:31.737628 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:05:31.737639 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:05:31.737650 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:05:31.737660 | orchestrator | ok: [testbed-node-0] 
2025-06-19 10:05:31.737671 | orchestrator | 2025-06-19 10:05:31.737681 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-06-19 10:05:31.737692 | orchestrator | Thursday 19 June 2025 10:05:30 +0000 (0:00:01.581) 0:00:59.674 ********* 2025-06-19 10:05:31.737703 | orchestrator | changed: [testbed-manager] 2025-06-19 10:05:31.737714 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:05:31.737730 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:05:31.737741 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:05:31.737752 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:05:31.737762 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:05:31.737773 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:05:31.737802 | orchestrator | 2025-06-19 10:05:31.737814 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-06-19 10:05:31.737825 | orchestrator | Thursday 19 June 2025 10:05:31 +0000 (0:00:00.578) 0:01:00.252 ********* 2025-06-19 10:05:31.737840 | orchestrator | ok: [testbed-manager] 2025-06-19 10:05:31.737851 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:05:31.737862 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:05:31.737873 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:05:31.737883 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:05:31.737895 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:05:31.737906 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:05:31.737916 | orchestrator | 2025-06-19 10:05:31.737933 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-06-19 10:07:42.582904 | orchestrator | Thursday 19 June 2025 10:05:31 +0000 (0:00:00.226) 0:01:00.479 ********* 2025-06-19 10:07:42.583071 | orchestrator | ok: [testbed-manager] 2025-06-19 10:07:42.583090 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:07:42.583101 | orchestrator | ok: 
[testbed-node-4] 2025-06-19 10:07:42.583111 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:07:42.583121 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:07:42.583130 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:07:42.583140 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:07:42.583150 | orchestrator | 2025-06-19 10:07:42.583160 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-06-19 10:07:42.583170 | orchestrator | Thursday 19 June 2025 10:05:32 +0000 (0:00:01.087) 0:01:01.567 ********* 2025-06-19 10:07:42.583180 | orchestrator | changed: [testbed-manager] 2025-06-19 10:07:42.583190 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:07:42.583200 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:07:42.583209 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:07:42.583219 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:07:42.583234 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:07:42.583251 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:07:42.583267 | orchestrator | 2025-06-19 10:07:42.583284 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-06-19 10:07:42.583300 | orchestrator | Thursday 19 June 2025 10:05:34 +0000 (0:00:01.627) 0:01:03.195 ********* 2025-06-19 10:07:42.583317 | orchestrator | ok: [testbed-manager] 2025-06-19 10:07:42.583335 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:07:42.583354 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:07:42.583370 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:07:42.583383 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:07:42.583393 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:07:42.583402 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:07:42.583412 | orchestrator | 2025-06-19 10:07:42.583422 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-06-19 10:07:42.583431 | 
orchestrator | Thursday 19 June 2025 10:05:36 +0000 (0:00:02.136) 0:01:05.332 ********* 2025-06-19 10:07:42.583441 | orchestrator | ok: [testbed-manager] 2025-06-19 10:07:42.583450 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:07:42.583460 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:07:42.583469 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:07:42.583478 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:07:42.583488 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:07:42.583497 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:07:42.583507 | orchestrator | 2025-06-19 10:07:42.583516 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-06-19 10:07:42.583526 | orchestrator | Thursday 19 June 2025 10:06:13 +0000 (0:00:36.872) 0:01:42.204 ********* 2025-06-19 10:07:42.583536 | orchestrator | changed: [testbed-manager] 2025-06-19 10:07:42.583568 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:07:42.583578 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:07:42.583587 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:07:42.583596 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:07:42.583606 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:07:42.583616 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:07:42.583625 | orchestrator | 2025-06-19 10:07:42.583635 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-06-19 10:07:42.583645 | orchestrator | Thursday 19 June 2025 10:07:27 +0000 (0:01:14.183) 0:02:56.388 ********* 2025-06-19 10:07:42.583654 | orchestrator | ok: [testbed-manager] 2025-06-19 10:07:42.583664 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:07:42.583673 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:07:42.583682 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:07:42.583692 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:07:42.583701 | orchestrator | ok: [testbed-node-4] 
2025-06-19 10:07:42.583710 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:07:42.583720 | orchestrator | 2025-06-19 10:07:42.583729 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-06-19 10:07:42.583740 | orchestrator | Thursday 19 June 2025 10:07:29 +0000 (0:00:01.647) 0:02:58.035 ********* 2025-06-19 10:07:42.583749 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:07:42.583759 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:07:42.583768 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:07:42.583777 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:07:42.583786 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:07:42.583796 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:07:42.583805 | orchestrator | changed: [testbed-manager] 2025-06-19 10:07:42.583814 | orchestrator | 2025-06-19 10:07:42.583824 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-06-19 10:07:42.583833 | orchestrator | Thursday 19 June 2025 10:07:40 +0000 (0:00:11.026) 0:03:09.062 ********* 2025-06-19 10:07:42.583856 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-06-19 10:07:42.583879 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 
'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-06-19 10:07:42.583915 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-06-19 10:07:42.583933 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-06-19 10:07:42.583943 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-06-19 10:07:42.583961 | orchestrator | 2025-06-19 10:07:42.583996 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-06-19 10:07:42.584006 | orchestrator | Thursday 19 June 2025 10:07:40 +0000 (0:00:00.433) 0:03:09.495 ********* 2025-06-19 10:07:42.584015 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-06-19 10:07:42.584025 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-06-19 10:07:42.584034 | orchestrator | skipping: [testbed-manager] 2025-06-19 
10:07:42.584043 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:07:42.584053 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-06-19 10:07:42.584062 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-06-19 10:07:42.584071 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:07:42.584081 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:07:42.584090 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-19 10:07:42.584099 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-19 10:07:42.584109 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-19 10:07:42.584118 | orchestrator | 2025-06-19 10:07:42.584127 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-06-19 10:07:42.584137 | orchestrator | Thursday 19 June 2025 10:07:42 +0000 (0:00:01.711) 0:03:11.206 ********* 2025-06-19 10:07:42.584146 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-06-19 10:07:42.584157 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-06-19 10:07:42.584166 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-06-19 10:07:42.584175 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-06-19 10:07:42.584184 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-06-19 10:07:42.584194 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-06-19 10:07:42.584203 | orchestrator | skipping: [testbed-node-3] 
=> (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-06-19 10:07:42.584212 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-06-19 10:07:42.584222 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-06-19 10:07:42.584231 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-06-19 10:07:42.584241 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-06-19 10:07:42.584250 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-06-19 10:07:42.584259 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-06-19 10:07:42.584269 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-06-19 10:07:42.584278 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-06-19 10:07:42.584287 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-06-19 10:07:42.584296 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-06-19 10:07:42.584306 | orchestrator | skipping: [testbed-manager] 2025-06-19 10:07:42.584315 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-06-19 10:07:42.584332 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-06-19 10:07:42.584342 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-06-19 10:07:42.584358 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-06-19 
10:07:51.138382 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-06-19 10:07:51.138499 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-06-19 10:07:51.138514 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-06-19 10:07:51.138529 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:07:51.138541 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-06-19 10:07:51.138552 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-06-19 10:07:51.138563 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-06-19 10:07:51.138574 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-06-19 10:07:51.138585 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-06-19 10:07:51.138596 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-06-19 10:07:51.138607 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-06-19 10:07:51.138618 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-06-19 10:07:51.138628 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-06-19 10:07:51.138639 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-06-19 10:07:51.138650 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-06-19 10:07:51.138661 | orchestrator | skipping: [testbed-node-4] 2025-06-19 
10:07:51.138673 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-06-19 10:07:51.138685 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-06-19 10:07:51.138695 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-06-19 10:07:51.138706 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-06-19 10:07:51.138717 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-06-19 10:07:51.138728 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:07:51.138738 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-06-19 10:07:51.138749 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-06-19 10:07:51.138760 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-06-19 10:07:51.138770 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-06-19 10:07:51.138781 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-06-19 10:07:51.138792 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-06-19 10:07:51.138803 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-06-19 10:07:51.138813 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-06-19 10:07:51.138848 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-06-19 10:07:51.138859 | orchestrator | changed: [testbed-node-1] => 
(item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-06-19 10:07:51.138888 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-06-19 10:07:51.138900 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-06-19 10:07:51.138911 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-06-19 10:07:51.138924 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-06-19 10:07:51.138936 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-06-19 10:07:51.138948 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-06-19 10:07:51.138960 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-06-19 10:07:51.138972 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-06-19 10:07:51.139036 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-06-19 10:07:51.139050 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-06-19 10:07:51.139062 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-06-19 10:07:51.139093 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-06-19 10:07:51.139106 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-06-19 10:07:51.139118 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-06-19 10:07:51.139130 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-06-19 
10:07:51.139142 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-06-19 10:07:51.139154 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-06-19 10:07:51.139166 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-06-19 10:07:51.139178 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-06-19 10:07:51.139191 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-06-19 10:07:51.139203 | orchestrator | 2025-06-19 10:07:51.139216 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-06-19 10:07:51.139229 | orchestrator | Thursday 19 June 2025 10:07:48 +0000 (0:00:05.722) 0:03:16.929 ********* 2025-06-19 10:07:51.139241 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-19 10:07:51.139253 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-19 10:07:51.139266 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-19 10:07:51.139277 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-19 10:07:51.139287 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-19 10:07:51.139298 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-19 10:07:51.139309 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-19 10:07:51.139319 | orchestrator | 2025-06-19 10:07:51.139329 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-06-19 10:07:51.139340 | orchestrator | Thursday 19 June 
2025 10:07:49 +0000 (0:00:01.635) 0:03:18.564 ********* 2025-06-19 10:07:51.139359 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-06-19 10:07:51.139370 | orchestrator | skipping: [testbed-manager] 2025-06-19 10:07:51.139381 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-06-19 10:07:51.139392 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:07:51.139403 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-06-19 10:07:51.139413 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:07:51.139424 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-06-19 10:07:51.139435 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:07:51.139446 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-06-19 10:07:51.139456 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-06-19 10:07:51.139467 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-06-19 10:07:51.139477 | orchestrator | 2025-06-19 10:07:51.139488 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-06-19 10:07:51.139499 | orchestrator | Thursday 19 June 2025 10:07:50 +0000 (0:00:00.495) 0:03:19.059 ********* 2025-06-19 10:07:51.139509 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-06-19 10:07:51.139520 | orchestrator | skipping: [testbed-manager] 2025-06-19 10:07:51.139530 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-06-19 
10:07:51.139541 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-06-19 10:07:51.139551 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:07:51.139562 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:07:51.139572 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-06-19 10:07:51.139583 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:07:51.139594 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-06-19 10:07:51.139605 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-06-19 10:07:51.139615 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-06-19 10:07:51.139626 | orchestrator | 2025-06-19 10:07:51.139641 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-06-19 10:07:51.139652 | orchestrator | Thursday 19 June 2025 10:07:50 +0000 (0:00:00.552) 0:03:19.612 ********* 2025-06-19 10:07:51.139663 | orchestrator | skipping: [testbed-manager] 2025-06-19 10:07:51.139674 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:07:51.139684 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:07:51.139695 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:07:51.139705 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:07:51.139722 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:08:02.438586 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:08:02.438699 | orchestrator | 2025-06-19 10:08:02.438715 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-06-19 10:08:02.438726 | orchestrator | Thursday 19 June 2025 10:07:51 +0000 (0:00:00.273) 0:03:19.886 ********* 2025-06-19 10:08:02.438736 | 
orchestrator | ok: [testbed-manager] 2025-06-19 10:08:02.438747 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:08:02.438757 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:08:02.438767 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:08:02.438777 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:08:02.438787 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:08:02.438796 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:08:02.438827 | orchestrator | 2025-06-19 10:08:02.438837 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-06-19 10:08:02.438847 | orchestrator | Thursday 19 June 2025 10:07:56 +0000 (0:00:05.561) 0:03:25.447 ********* 2025-06-19 10:08:02.438857 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-06-19 10:08:02.438867 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-06-19 10:08:02.438877 | orchestrator | skipping: [testbed-manager] 2025-06-19 10:08:02.438887 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-06-19 10:08:02.438897 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:08:02.438906 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:08:02.438916 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-06-19 10:08:02.438929 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:08:02.438940 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-06-19 10:08:02.438949 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:08:02.438959 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-06-19 10:08:02.438969 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:08:02.438978 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-06-19 10:08:02.438988 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:08:02.439032 | orchestrator | 2025-06-19 10:08:02.439043 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 
2025-06-19 10:08:02.439053 | orchestrator | Thursday 19 June 2025 10:07:56 +0000 (0:00:00.292) 0:03:25.739 ********* 2025-06-19 10:08:02.439063 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-06-19 10:08:02.439073 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-06-19 10:08:02.439082 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-06-19 10:08:02.439092 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-06-19 10:08:02.439101 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-06-19 10:08:02.439111 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-06-19 10:08:02.439121 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-06-19 10:08:02.439132 | orchestrator | 2025-06-19 10:08:02.439144 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-06-19 10:08:02.439156 | orchestrator | Thursday 19 June 2025 10:07:58 +0000 (0:00:01.021) 0:03:26.761 ********* 2025-06-19 10:08:02.439170 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:08:02.439184 | orchestrator | 2025-06-19 10:08:02.439195 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-06-19 10:08:02.439207 | orchestrator | Thursday 19 June 2025 10:07:58 +0000 (0:00:00.416) 0:03:27.178 ********* 2025-06-19 10:08:02.439218 | orchestrator | ok: [testbed-manager] 2025-06-19 10:08:02.439229 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:08:02.439241 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:08:02.439252 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:08:02.439263 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:08:02.439274 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:08:02.439285 | orchestrator | ok: [testbed-node-2] 2025-06-19 
10:08:02.439297 | orchestrator | 2025-06-19 10:08:02.439308 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-06-19 10:08:02.439319 | orchestrator | Thursday 19 June 2025 10:07:59 +0000 (0:00:01.303) 0:03:28.482 ********* 2025-06-19 10:08:02.439330 | orchestrator | ok: [testbed-manager] 2025-06-19 10:08:02.439341 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:08:02.439352 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:08:02.439363 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:08:02.439374 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:08:02.439386 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:08:02.439397 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:08:02.439407 | orchestrator | 2025-06-19 10:08:02.439417 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-06-19 10:08:02.439434 | orchestrator | Thursday 19 June 2025 10:08:00 +0000 (0:00:00.644) 0:03:29.126 ********* 2025-06-19 10:08:02.439443 | orchestrator | changed: [testbed-manager] 2025-06-19 10:08:02.439453 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:08:02.439463 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:08:02.439473 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:08:02.439482 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:08:02.439492 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:08:02.439502 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:08:02.439511 | orchestrator | 2025-06-19 10:08:02.439521 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-06-19 10:08:02.439531 | orchestrator | Thursday 19 June 2025 10:08:00 +0000 (0:00:00.591) 0:03:29.718 ********* 2025-06-19 10:08:02.439541 | orchestrator | ok: [testbed-manager] 2025-06-19 10:08:02.439551 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:08:02.439560 | orchestrator | ok: [testbed-node-4] 
2025-06-19 10:08:02.439570 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:08:02.439579 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:08:02.439589 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:08:02.439599 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:08:02.439608 | orchestrator |
2025-06-19 10:08:02.439632 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2025-06-19 10:08:02.439642 | orchestrator | Thursday 19 June 2025 10:08:01 +0000 (0:00:00.580) 0:03:30.298 *********
2025-06-19 10:08:02.439672 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1750326275.17, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:08:02.439687 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1750326392.4188633, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:08:02.439698 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1750326384.1897302, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:08:02.439709 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1750326380.9714236, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:08:02.439719 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1750326400.4992473, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:08:02.439736 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1750326395.5801055, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:08:02.439746 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1750326391.5913854, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:08:02.439773 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1750326363.4846168, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:08:26.211426 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1750326283.8421357, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:08:26.211565 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1750326290.0926094, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:08:26.211582 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1750326278.4137073, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:08:26.211595 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1750326298.7735066, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:08:26.211629 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1750326301.0464902, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:08:26.211641 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1750326287.8479173, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:08:26.211653 | orchestrator |
2025-06-19 10:08:26.211667 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2025-06-19 10:08:26.211679 | orchestrator | Thursday 19 June 2025 10:08:02 +0000 (0:00:00.884) 0:03:31.183 *********
2025-06-19 10:08:26.211690 | orchestrator | changed: [testbed-manager]
2025-06-19 10:08:26.211708 | orchestrator | changed: [testbed-node-3]
2025-06-19 10:08:26.211719 | orchestrator | changed: [testbed-node-4]
2025-06-19 10:08:26.211729 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:08:26.211740 | orchestrator | changed: [testbed-node-5]
2025-06-19 10:08:26.211751 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:08:26.211762 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:08:26.211773 | orchestrator |
2025-06-19 10:08:26.211784 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2025-06-19 10:08:26.211795 | orchestrator | Thursday 19 June 2025 10:08:03 +0000 (0:00:01.145) 0:03:32.329 *********
2025-06-19 10:08:26.211806 | orchestrator | changed: [testbed-manager]
2025-06-19 10:08:26.211816 | orchestrator | changed: [testbed-node-3]
2025-06-19 10:08:26.211827 | orchestrator | changed: [testbed-node-4]
2025-06-19 10:08:26.211837 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:08:26.211865 | orchestrator | changed: [testbed-node-5]
2025-06-19 10:08:26.211877 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:08:26.211887 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:08:26.211898 | orchestrator |
2025-06-19 10:08:26.211909 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2025-06-19 10:08:26.211920 | orchestrator | Thursday 19 June 2025 10:08:04 +0000 (0:00:01.177) 0:03:33.506 *********
2025-06-19 10:08:26.211931 | orchestrator | changed: [testbed-manager]
2025-06-19 10:08:26.211941 | orchestrator | changed: [testbed-node-3]
2025-06-19 10:08:26.211955 | orchestrator | changed: [testbed-node-5]
2025-06-19 10:08:26.211968 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:08:26.211980 | orchestrator | changed: [testbed-node-4]
2025-06-19 10:08:26.211992 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:08:26.212004 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:08:26.212016 | orchestrator |
2025-06-19 10:08:26.212028 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2025-06-19 10:08:26.212070 | orchestrator | Thursday 19 June 2025 10:08:05 +0000 (0:00:01.226) 0:03:34.732 *********
2025-06-19 10:08:26.212082 | orchestrator | skipping: [testbed-manager]
2025-06-19 10:08:26.212094 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:08:26.212115 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:08:26.212127 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:08:26.212139 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:08:26.212152 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:08:26.212164 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:08:26.212176 | orchestrator |
2025-06-19 10:08:26.212189 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2025-06-19 10:08:26.212201 | orchestrator | Thursday 19 June 2025 10:08:06 +0000 (0:00:00.312) 0:03:35.044 *********
2025-06-19 10:08:26.212213 | orchestrator | ok: [testbed-manager]
2025-06-19 10:08:26.212226 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:08:26.212238 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:08:26.212250 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:08:26.212261 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:08:26.212274 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:08:26.212285 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:08:26.212298 | orchestrator |
2025-06-19 10:08:26.212310 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2025-06-19 10:08:26.212322 | orchestrator | Thursday 19 June 2025 10:08:07 +0000 (0:00:00.734) 0:03:35.779 *********
2025-06-19 10:08:26.212336 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-19 10:08:26.212349 | orchestrator |
2025-06-19 10:08:26.212360 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2025-06-19 10:08:26.212371 | orchestrator | Thursday 19 June 2025 10:08:07 +0000 (0:00:00.396) 0:03:36.175 *********
2025-06-19 10:08:26.212382 | orchestrator | ok: [testbed-manager]
2025-06-19 10:08:26.212393 | orchestrator | changed: [testbed-node-3]
2025-06-19 10:08:26.212403 | orchestrator | changed: [testbed-node-4]
2025-06-19 10:08:26.212414 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:08:26.212425 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:08:26.212435 | orchestrator | changed: [testbed-node-5]
2025-06-19 10:08:26.212446 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:08:26.212457 | orchestrator |
2025-06-19 10:08:26.212468 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2025-06-19 10:08:26.212479 | orchestrator | Thursday 19 June 2025 10:08:14 +0000 (0:00:07.362) 0:03:43.538 *********
2025-06-19 10:08:26.212489 | orchestrator | ok: [testbed-manager]
2025-06-19 10:08:26.212500 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:08:26.212511 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:08:26.212522 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:08:26.212533 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:08:26.212544 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:08:26.212554 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:08:26.212565 | orchestrator |
2025-06-19 10:08:26.212576 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2025-06-19 10:08:26.212587 | orchestrator | Thursday 19 June 2025 10:08:15 +0000 (0:00:01.193) 0:03:44.731 *********
2025-06-19 10:08:26.212597 | orchestrator | ok: [testbed-manager]
2025-06-19 10:08:26.212608 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:08:26.212619 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:08:26.212629 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:08:26.212640 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:08:26.212651 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:08:26.212661 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:08:26.212672 | orchestrator |
2025-06-19 10:08:26.212683 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2025-06-19 10:08:26.212694 | orchestrator | Thursday 19 June 2025 10:08:17 +0000 (0:00:01.042) 0:03:45.773 *********
2025-06-19 10:08:26.212705 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-19 10:08:26.212723 | orchestrator |
2025-06-19 10:08:26.212734 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2025-06-19 10:08:26.212744 | orchestrator | Thursday 19 June 2025 10:08:17 +0000 (0:00:00.468) 0:03:46.242 *********
2025-06-19 10:08:26.212755 | orchestrator | changed: [testbed-node-3]
2025-06-19 10:08:26.212771 | orchestrator | changed: [testbed-node-4]
2025-06-19 10:08:26.212782 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:08:26.212793 | orchestrator | changed: [testbed-manager]
2025-06-19 10:08:26.212804 | orchestrator | changed: [testbed-node-5]
2025-06-19 10:08:26.212815 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:08:26.212825 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:08:26.212836 | orchestrator |
2025-06-19 10:08:26.212847 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2025-06-19 10:08:26.212858 | orchestrator | Thursday 19 June 2025 10:08:25 +0000 (0:00:08.111) 0:03:54.353 *********
2025-06-19 10:08:26.212868 | orchestrator | changed: [testbed-manager]
2025-06-19 10:08:26.212879 | orchestrator | changed: [testbed-node-3]
2025-06-19 10:08:26.212890 | orchestrator | changed: [testbed-node-4]
2025-06-19 10:08:26.212907 | orchestrator | changed: [testbed-node-5]
2025-06-19 10:09:33.203571 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:09:33.203694 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:09:33.203709 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:09:33.203721 | orchestrator |
2025-06-19 10:09:33.203733 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2025-06-19 10:09:33.203745 | orchestrator | Thursday 19 June 2025 10:08:26 +0000 (0:00:00.601) 0:03:54.954 *********
2025-06-19 10:09:33.203756 | orchestrator | changed: [testbed-manager]
2025-06-19 10:09:33.203768 | orchestrator | changed: [testbed-node-3]
2025-06-19 10:09:33.203779 | orchestrator | changed: [testbed-node-4]
2025-06-19 10:09:33.203789 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:09:33.203800 | orchestrator | changed: [testbed-node-5]
2025-06-19 10:09:33.203811 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:09:33.203821 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:09:33.203832 | orchestrator |
2025-06-19 10:09:33.203843 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2025-06-19 10:09:33.203854 | orchestrator | Thursday 19 June 2025 10:08:27 +0000 (0:00:01.159) 0:03:56.114 *********
2025-06-19 10:09:33.203865 | orchestrator | changed: [testbed-manager]
2025-06-19 10:09:33.203876 | orchestrator | changed: [testbed-node-3]
2025-06-19 10:09:33.203887 | orchestrator | changed: [testbed-node-4]
2025-06-19 10:09:33.203898 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:09:33.203908 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:09:33.203919 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:09:33.203930 | orchestrator | changed: [testbed-node-5]
2025-06-19 10:09:33.203940 | orchestrator |
2025-06-19 10:09:33.203951 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2025-06-19 10:09:33.203962 | orchestrator | Thursday 19 June 2025 10:08:29 +0000 (0:00:01.963) 0:03:58.078 *********
2025-06-19 10:09:33.203973 | orchestrator | ok: [testbed-manager]
2025-06-19 10:09:33.203985 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:09:33.203995 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:09:33.204006 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:09:33.204017 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:09:33.204027 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:09:33.204038 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:09:33.204049 | orchestrator |
2025-06-19 10:09:33.204060 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2025-06-19 10:09:33.204071 | orchestrator | Thursday 19 June 2025 10:08:29 +0000 (0:00:00.304) 0:03:58.382 *********
2025-06-19 10:09:33.204082 | orchestrator | ok: [testbed-manager]
2025-06-19 10:09:33.204093 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:09:33.204104 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:09:33.204116 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:09:33.204128 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:09:33.204192 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:09:33.204206 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:09:33.204218 | orchestrator |
2025-06-19 10:09:33.204230 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2025-06-19 10:09:33.204243 | orchestrator | Thursday 19 June 2025 10:08:29 +0000 (0:00:00.311) 0:03:58.694 *********
2025-06-19 10:09:33.204255 | orchestrator | ok: [testbed-manager]
2025-06-19 10:09:33.204267 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:09:33.204279 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:09:33.204292 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:09:33.204304 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:09:33.204315 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:09:33.204328 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:09:33.204340 | orchestrator |
2025-06-19 10:09:33.204352 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2025-06-19 10:09:33.204365 | orchestrator | Thursday 19 June 2025 10:08:30 +0000 (0:00:00.299) 0:03:58.993 *********
2025-06-19 10:09:33.204377 | orchestrator | ok: [testbed-manager]
2025-06-19 10:09:33.204389 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:09:33.204401 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:09:33.204413 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:09:33.204425 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:09:33.204437 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:09:33.204449 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:09:33.204461 | orchestrator |
2025-06-19 10:09:33.204474 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2025-06-19 10:09:33.204486 | orchestrator | Thursday 19 June 2025 10:08:35 +0000 (0:00:05.524) 0:04:04.518 *********
2025-06-19 10:09:33.204498 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-19 10:09:33.204511 | orchestrator |
2025-06-19 10:09:33.204522 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2025-06-19 10:09:33.204533 | orchestrator | Thursday 19 June 2025 10:08:36 +0000 (0:00:00.383) 0:04:04.902 *********
2025-06-19 10:09:33.204544 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2025-06-19 10:09:33.204555 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2025-06-19 10:09:33.204566 | orchestrator | skipping: [testbed-manager]
2025-06-19 10:09:33.204576 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2025-06-19 10:09:33.204587 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2025-06-19 10:09:33.204598 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2025-06-19 10:09:33.204609 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:09:33.204620 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2025-06-19 10:09:33.204645 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2025-06-19 10:09:33.204656 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2025-06-19 10:09:33.204667 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:09:33.204678 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2025-06-19 10:09:33.204688 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:09:33.204699 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2025-06-19 10:09:33.204709 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2025-06-19 10:09:33.204720 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:09:33.204731 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2025-06-19 10:09:33.204759 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:09:33.204770 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2025-06-19 10:09:33.204781 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2025-06-19 10:09:33.204792 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:09:33.204802 | orchestrator |
2025-06-19 10:09:33.204822 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2025-06-19 10:09:33.204833 | orchestrator | Thursday 19 June 2025 10:08:36 +0000 (0:00:00.356) 0:04:05.258 *********
2025-06-19 10:09:33.204844 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-19 10:09:33.204855 | orchestrator |
2025-06-19 10:09:33.204866 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2025-06-19 10:09:33.204876 | orchestrator | Thursday 19 June 2025 10:08:36 +0000 (0:00:00.370) 0:04:05.629 *********
2025-06-19 10:09:33.204887 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2025-06-19 10:09:33.204898 | orchestrator | skipping: [testbed-manager]
2025-06-19 10:09:33.204908 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2025-06-19 10:09:33.204919 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:09:33.204930 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2025-06-19 10:09:33.204941 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2025-06-19 10:09:33.204951 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:09:33.204962 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2025-06-19 10:09:33.204972 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:09:33.204983 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:09:33.204994 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2025-06-19 10:09:33.205004 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:09:33.205015 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2025-06-19 10:09:33.205026 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:09:33.205036 | orchestrator |
2025-06-19 10:09:33.205047 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2025-06-19 10:09:33.205057 | orchestrator | Thursday 19 June 2025 10:08:37 +0000 (0:00:00.299) 0:04:05.928 *********
2025-06-19 10:09:33.205068 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-19 10:09:33.205079 | orchestrator |
2025-06-19 10:09:33.205090 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2025-06-19 10:09:33.205100 | orchestrator | Thursday 19 June 2025 10:08:37 +0000 (0:00:00.501) 0:04:06.429 *********
2025-06-19 10:09:33.205111 | orchestrator | changed: [testbed-manager]
2025-06-19 10:09:33.205122 | orchestrator | changed: [testbed-node-3]
2025-06-19 10:09:33.205132 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:09:33.205161 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:09:33.205172 | orchestrator | changed: [testbed-node-4]
2025-06-19 10:09:33.205182 | orchestrator | changed: [testbed-node-5]
2025-06-19 10:09:33.205193 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:09:33.205204 | orchestrator |
2025-06-19 10:09:33.205215 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2025-06-19 10:09:33.205226 | orchestrator | Thursday 19 June 2025 10:09:11 +0000 (0:00:33.425) 0:04:39.854 *********
2025-06-19 10:09:33.205236 | orchestrator | changed: [testbed-manager]
2025-06-19 10:09:33.205247 | orchestrator | changed: [testbed-node-3]
2025-06-19 10:09:33.205258 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:09:33.205268 | orchestrator | changed: [testbed-node-4]
2025-06-19 10:09:33.205279 | orchestrator | changed: [testbed-node-5]
2025-06-19 10:09:33.205290 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:09:33.205301 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:09:33.205312 | orchestrator |
2025-06-19 10:09:33.205322 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2025-06-19 10:09:33.205333 | orchestrator | Thursday 19 June 2025 10:09:18 +0000 (0:00:07.695) 0:04:47.550 *********
2025-06-19 10:09:33.205351 | orchestrator | changed: [testbed-manager]
2025-06-19 10:09:33.205361 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:09:33.205372 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:09:33.205383 | orchestrator | changed: [testbed-node-3]
2025-06-19 10:09:33.205394 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:09:33.205404 | orchestrator | changed: [testbed-node-4]
2025-06-19 10:09:33.205415 | orchestrator | changed: [testbed-node-5]
2025-06-19 10:09:33.205426 | orchestrator |
2025-06-19 10:09:33.205436 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2025-06-19 10:09:33.205447 | orchestrator | Thursday 19 June 2025 10:09:26 +0000 (0:00:07.537) 0:04:55.087 *********
2025-06-19 10:09:33.205458 | orchestrator | ok: [testbed-manager]
2025-06-19 10:09:33.205469 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:09:33.205480 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:09:33.205490 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:09:33.205501 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:09:33.205512 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:09:33.205523 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:09:33.205533 | orchestrator |
2025-06-19 10:09:33.205544 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2025-06-19 10:09:33.205555 | orchestrator | Thursday 19 June 2025 10:09:27 +0000 (0:00:01.561) 0:04:56.648 *********
2025-06-19 10:09:33.205566 | orchestrator | changed: [testbed-node-3]
2025-06-19 10:09:33.205577 | orchestrator | changed: [testbed-manager]
2025-06-19 10:09:33.205588 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:09:33.205599 | orchestrator | changed: [testbed-node-4]
2025-06-19 10:09:33.205609 | orchestrator | changed: [testbed-node-5]
2025-06-19 10:09:33.205620 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:09:33.205631 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:09:33.205642 | orchestrator |
2025-06-19 10:09:33.205653 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2025-06-19 10:09:33.205669 | orchestrator | Thursday 19 June 2025 10:09:33 +0000 (0:00:05.293) 0:05:01.942 *********
2025-06-19 10:09:44.039871 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-19 10:09:44.039994 | orchestrator |
2025-06-19 10:09:44.040011 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2025-06-19 10:09:44.040024 | orchestrator | Thursday 19 June 2025 10:09:33 +0000 (0:00:00.411) 0:05:02.354 *********
2025-06-19 10:09:44.040035 | orchestrator | changed: [testbed-manager]
2025-06-19 10:09:44.040047 | orchestrator | changed: [testbed-node-3]
2025-06-19 10:09:44.040058 | orchestrator | changed: [testbed-node-4]
2025-06-19 10:09:44.040069 | orchestrator | changed: [testbed-node-5]
2025-06-19 10:09:44.040080 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:09:44.040091 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:09:44.040102 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:09:44.040113 | orchestrator |
2025-06-19 10:09:44.040124 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2025-06-19 10:09:44.040135 | orchestrator | Thursday 19 June 2025 10:09:34 +0000 (0:00:00.727) 0:05:03.081 *********
2025-06-19 10:09:44.040147 | orchestrator | ok: [testbed-manager]
2025-06-19 10:09:44.040202 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:09:44.040214 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:09:44.040225 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:09:44.040267 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:09:44.040279 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:09:44.040290 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:09:44.040301 | orchestrator |
2025-06-19 10:09:44.040312 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2025-06-19 10:09:44.040323 | orchestrator | Thursday 19 June 2025 10:09:35 +0000 (0:00:01.625) 0:05:04.707 *********
2025-06-19 10:09:44.040334 | orchestrator | changed: [testbed-node-5]
2025-06-19 10:09:44.040367 | orchestrator | changed: [testbed-node-4]
2025-06-19 10:09:44.040379 | orchestrator | changed: [testbed-manager]
2025-06-19 10:09:44.040390 | orchestrator | changed: [testbed-node-3]
2025-06-19 10:09:44.040400 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:09:44.040413 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:09:44.040426 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:09:44.040439 | orchestrator |
2025-06-19 10:09:44.040452 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2025-06-19 10:09:44.040465 | orchestrator | Thursday 19 June 2025 10:09:36 +0000 (0:00:00.808) 0:05:05.515 *********
2025-06-19 10:09:44.040478 | orchestrator | skipping: [testbed-manager]
2025-06-19 10:09:44.040490 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:09:44.040503 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:09:44.040515 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:09:44.040528 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:09:44.040540 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:09:44.040553 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:09:44.040565 | orchestrator |
2025-06-19 10:09:44.040578 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2025-06-19 10:09:44.040591 | orchestrator | Thursday 19 June 2025 10:09:37 +0000 (0:00:00.278) 0:05:05.794 *********
2025-06-19 10:09:44.040604 | orchestrator | skipping: [testbed-manager]
2025-06-19 10:09:44.040616 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:09:44.040629 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:09:44.040642 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:09:44.040655 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:09:44.040667 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:09:44.040679 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:09:44.040692 | orchestrator |
2025-06-19 10:09:44.040704 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2025-06-19 10:09:44.040717 | orchestrator | Thursday 19 June 2025 10:09:37 +0000 (0:00:00.392) 0:05:06.186 *********
2025-06-19 10:09:44.040730 | orchestrator | ok: [testbed-manager]
2025-06-19 10:09:44.040743 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:09:44.040756 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:09:44.040769 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:09:44.040781 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:09:44.040792 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:09:44.040802 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:09:44.040813 | orchestrator |
2025-06-19 10:09:44.040824 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2025-06-19 10:09:44.040835 | orchestrator | Thursday 19 June 2025 10:09:37 +0000 (0:00:00.273) 0:05:06.464 *********
2025-06-19 10:09:44.040846 | orchestrator | skipping: [testbed-manager]
2025-06-19 10:09:44.040857 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:09:44.040868 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:09:44.040878 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:09:44.040888 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:09:44.040899 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:09:44.040910 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:09:44.040920 | orchestrator |
2025-06-19 10:09:44.040931 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2025-06-19 10:09:44.040943 | orchestrator | Thursday 19 June 2025 10:09:37 +0000 (0:00:00.273) 0:05:06.738 *********
2025-06-19 10:09:44.040954 | orchestrator | ok: [testbed-manager]
2025-06-19 10:09:44.040964 |
orchestrator | ok: [testbed-node-3] 2025-06-19 10:09:44.040975 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:09:44.040985 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:09:44.040996 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:09:44.041007 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:09:44.041023 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:09:44.041034 | orchestrator | 2025-06-19 10:09:44.041045 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2025-06-19 10:09:44.041055 | orchestrator | Thursday 19 June 2025 10:09:38 +0000 (0:00:00.299) 0:05:07.038 ********* 2025-06-19 10:09:44.041076 | orchestrator | ok: [testbed-manager] =>  2025-06-19 10:09:44.041087 | orchestrator |  docker_version: 5:27.5.1 2025-06-19 10:09:44.041098 | orchestrator | ok: [testbed-node-3] =>  2025-06-19 10:09:44.041108 | orchestrator |  docker_version: 5:27.5.1 2025-06-19 10:09:44.041119 | orchestrator | ok: [testbed-node-4] =>  2025-06-19 10:09:44.041129 | orchestrator |  docker_version: 5:27.5.1 2025-06-19 10:09:44.041140 | orchestrator | ok: [testbed-node-5] =>  2025-06-19 10:09:44.041151 | orchestrator |  docker_version: 5:27.5.1 2025-06-19 10:09:44.041180 | orchestrator | ok: [testbed-node-0] =>  2025-06-19 10:09:44.041191 | orchestrator |  docker_version: 5:27.5.1 2025-06-19 10:09:44.041220 | orchestrator | ok: [testbed-node-1] =>  2025-06-19 10:09:44.041232 | orchestrator |  docker_version: 5:27.5.1 2025-06-19 10:09:44.041243 | orchestrator | ok: [testbed-node-2] =>  2025-06-19 10:09:44.041254 | orchestrator |  docker_version: 5:27.5.1 2025-06-19 10:09:44.041264 | orchestrator | 2025-06-19 10:09:44.041276 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2025-06-19 10:09:44.041287 | orchestrator | Thursday 19 June 2025 10:09:38 +0000 (0:00:00.274) 0:05:07.313 ********* 2025-06-19 10:09:44.041297 | orchestrator | ok: [testbed-manager] =>  2025-06-19 
10:09:44.041308 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-19 10:09:44.041319 | orchestrator | ok: [testbed-node-3] =>  2025-06-19 10:09:44.041329 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-19 10:09:44.041340 | orchestrator | ok: [testbed-node-4] =>  2025-06-19 10:09:44.041351 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-19 10:09:44.041362 | orchestrator | ok: [testbed-node-5] =>  2025-06-19 10:09:44.041372 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-19 10:09:44.041383 | orchestrator | ok: [testbed-node-0] =>  2025-06-19 10:09:44.041394 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-19 10:09:44.041404 | orchestrator | ok: [testbed-node-1] =>  2025-06-19 10:09:44.041415 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-19 10:09:44.041426 | orchestrator | ok: [testbed-node-2] =>  2025-06-19 10:09:44.041436 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-19 10:09:44.041447 | orchestrator | 2025-06-19 10:09:44.041458 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-06-19 10:09:44.041469 | orchestrator | Thursday 19 June 2025 10:09:38 +0000 (0:00:00.400) 0:05:07.713 ********* 2025-06-19 10:09:44.041480 | orchestrator | skipping: [testbed-manager] 2025-06-19 10:09:44.041491 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:09:44.041501 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:09:44.041512 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:09:44.041523 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:09:44.041533 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:09:44.041544 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:09:44.041555 | orchestrator | 2025-06-19 10:09:44.041566 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-06-19 10:09:44.041577 | orchestrator | Thursday 19 June 2025 10:09:39 +0000 (0:00:00.271) 0:05:07.985 ********* 
2025-06-19 10:09:44.041587 | orchestrator | skipping: [testbed-manager] 2025-06-19 10:09:44.041598 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:09:44.041609 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:09:44.041619 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:09:44.041630 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:09:44.041641 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:09:44.041652 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:09:44.041662 | orchestrator | 2025-06-19 10:09:44.041674 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-06-19 10:09:44.041684 | orchestrator | Thursday 19 June 2025 10:09:39 +0000 (0:00:00.260) 0:05:08.246 ********* 2025-06-19 10:09:44.041697 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:09:44.041718 | orchestrator | 2025-06-19 10:09:44.041729 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-06-19 10:09:44.041740 | orchestrator | Thursday 19 June 2025 10:09:39 +0000 (0:00:00.421) 0:05:08.667 ********* 2025-06-19 10:09:44.041751 | orchestrator | ok: [testbed-manager] 2025-06-19 10:09:44.041762 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:09:44.041773 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:09:44.041784 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:09:44.041795 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:09:44.041805 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:09:44.041816 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:09:44.041827 | orchestrator | 2025-06-19 10:09:44.041838 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-06-19 
10:09:44.041849 | orchestrator | Thursday 19 June 2025 10:09:40 +0000 (0:00:00.839) 0:05:09.506 ********* 2025-06-19 10:09:44.041860 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:09:44.041870 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:09:44.041881 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:09:44.041892 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:09:44.041903 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:09:44.041913 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:09:44.041924 | orchestrator | ok: [testbed-manager] 2025-06-19 10:09:44.041935 | orchestrator | 2025-06-19 10:09:44.041946 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-06-19 10:09:44.041958 | orchestrator | Thursday 19 June 2025 10:09:43 +0000 (0:00:02.733) 0:05:12.240 ********* 2025-06-19 10:09:44.041969 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-06-19 10:09:44.041980 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-06-19 10:09:44.041990 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-06-19 10:09:44.042001 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-06-19 10:09:44.042012 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-06-19 10:09:44.042081 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-06-19 10:09:44.042093 | orchestrator | skipping: [testbed-manager] 2025-06-19 10:09:44.042109 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-06-19 10:09:44.042120 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-06-19 10:09:44.042130 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:09:44.042141 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-06-19 10:09:44.042179 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-06-19 10:09:44.042191 | orchestrator | 
skipping: [testbed-node-5] => (item=docker.io)  2025-06-19 10:09:44.042202 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-06-19 10:09:44.042213 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:09:44.042224 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-06-19 10:09:44.042235 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-06-19 10:09:44.042254 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-06-19 10:10:40.794287 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:10:40.794424 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-06-19 10:10:40.794442 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-06-19 10:10:40.794455 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-06-19 10:10:40.794466 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:10:40.794477 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:10:40.794488 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-06-19 10:10:40.794499 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-06-19 10:10:40.794510 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-06-19 10:10:40.794520 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:10:40.794532 | orchestrator | 2025-06-19 10:10:40.794575 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-06-19 10:10:40.794588 | orchestrator | Thursday 19 June 2025 10:09:44 +0000 (0:00:00.754) 0:05:12.995 ********* 2025-06-19 10:10:40.794599 | orchestrator | ok: [testbed-manager] 2025-06-19 10:10:40.794609 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:10:40.794620 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:10:40.794631 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:10:40.794641 | orchestrator | changed: [testbed-node-1] 2025-06-19 
10:10:40.794652 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:10:40.794662 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:10:40.794673 | orchestrator | 2025-06-19 10:10:40.794683 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-06-19 10:10:40.794694 | orchestrator | Thursday 19 June 2025 10:09:50 +0000 (0:00:05.956) 0:05:18.951 ********* 2025-06-19 10:10:40.794705 | orchestrator | ok: [testbed-manager] 2025-06-19 10:10:40.794715 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:10:40.794725 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:10:40.794736 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:10:40.794747 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:10:40.794757 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:10:40.794767 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:10:40.794778 | orchestrator | 2025-06-19 10:10:40.794788 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-06-19 10:10:40.794799 | orchestrator | Thursday 19 June 2025 10:09:51 +0000 (0:00:01.030) 0:05:19.981 ********* 2025-06-19 10:10:40.794810 | orchestrator | ok: [testbed-manager] 2025-06-19 10:10:40.794820 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:10:40.794830 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:10:40.794842 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:10:40.794852 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:10:40.794863 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:10:40.794873 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:10:40.794884 | orchestrator | 2025-06-19 10:10:40.794894 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-06-19 10:10:40.794905 | orchestrator | Thursday 19 June 2025 10:09:58 +0000 (0:00:06.973) 0:05:26.955 ********* 2025-06-19 10:10:40.794916 | 
orchestrator | changed: [testbed-manager] 2025-06-19 10:10:40.794926 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:10:40.794937 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:10:40.794948 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:10:40.794958 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:10:40.794968 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:10:40.794979 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:10:40.794989 | orchestrator | 2025-06-19 10:10:40.795000 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-06-19 10:10:40.795011 | orchestrator | Thursday 19 June 2025 10:10:01 +0000 (0:00:03.183) 0:05:30.139 ********* 2025-06-19 10:10:40.795021 | orchestrator | ok: [testbed-manager] 2025-06-19 10:10:40.795032 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:10:40.795042 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:10:40.795053 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:10:40.795063 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:10:40.795073 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:10:40.795084 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:10:40.795094 | orchestrator | 2025-06-19 10:10:40.795105 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-06-19 10:10:40.795116 | orchestrator | Thursday 19 June 2025 10:10:02 +0000 (0:00:01.551) 0:05:31.690 ********* 2025-06-19 10:10:40.795126 | orchestrator | ok: [testbed-manager] 2025-06-19 10:10:40.795137 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:10:40.795147 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:10:40.795158 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:10:40.795168 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:10:40.795179 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:10:40.795196 | orchestrator | changed: 
[testbed-node-2] 2025-06-19 10:10:40.795207 | orchestrator | 2025-06-19 10:10:40.795218 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-06-19 10:10:40.795229 | orchestrator | Thursday 19 June 2025 10:10:04 +0000 (0:00:01.310) 0:05:33.000 ********* 2025-06-19 10:10:40.795263 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:10:40.795274 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:10:40.795285 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:10:40.795295 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:10:40.795305 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:10:40.795316 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:10:40.795327 | orchestrator | changed: [testbed-manager] 2025-06-19 10:10:40.795337 | orchestrator | 2025-06-19 10:10:40.795364 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-06-19 10:10:40.795375 | orchestrator | Thursday 19 June 2025 10:10:04 +0000 (0:00:00.629) 0:05:33.630 ********* 2025-06-19 10:10:40.795386 | orchestrator | ok: [testbed-manager] 2025-06-19 10:10:40.795397 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:10:40.795407 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:10:40.795418 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:10:40.795428 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:10:40.795439 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:10:40.795450 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:10:40.795460 | orchestrator | 2025-06-19 10:10:40.795471 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-06-19 10:10:40.795482 | orchestrator | Thursday 19 June 2025 10:10:14 +0000 (0:00:09.413) 0:05:43.043 ********* 2025-06-19 10:10:40.795492 | orchestrator | changed: [testbed-manager] 2025-06-19 10:10:40.795525 | orchestrator | changed: [testbed-node-3] 
2025-06-19 10:10:40.795537 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:10:40.795547 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:10:40.795558 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:10:40.795568 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:10:40.795579 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:10:40.795589 | orchestrator | 2025-06-19 10:10:40.795600 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-06-19 10:10:40.795614 | orchestrator | Thursday 19 June 2025 10:10:15 +0000 (0:00:00.888) 0:05:43.932 ********* 2025-06-19 10:10:40.795633 | orchestrator | ok: [testbed-manager] 2025-06-19 10:10:40.795651 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:10:40.795668 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:10:40.795682 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:10:40.795693 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:10:40.795704 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:10:40.795714 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:10:40.795724 | orchestrator | 2025-06-19 10:10:40.795735 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-06-19 10:10:40.795746 | orchestrator | Thursday 19 June 2025 10:10:23 +0000 (0:00:08.704) 0:05:52.636 ********* 2025-06-19 10:10:40.795756 | orchestrator | ok: [testbed-manager] 2025-06-19 10:10:40.795767 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:10:40.795777 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:10:40.795788 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:10:40.795798 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:10:40.795809 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:10:40.795819 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:10:40.795830 | orchestrator | 2025-06-19 10:10:40.795841 | orchestrator | TASK 
[osism.services.docker : Unblock installation of python docker packages] *** 2025-06-19 10:10:40.795851 | orchestrator | Thursday 19 June 2025 10:10:34 +0000 (0:00:10.507) 0:06:03.143 ********* 2025-06-19 10:10:40.795862 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-06-19 10:10:40.795873 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-06-19 10:10:40.795893 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-06-19 10:10:40.795904 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-06-19 10:10:40.795915 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-06-19 10:10:40.795925 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-06-19 10:10:40.795936 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-06-19 10:10:40.795947 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-06-19 10:10:40.795957 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-06-19 10:10:40.795968 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-06-19 10:10:40.795978 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-06-19 10:10:40.795989 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-06-19 10:10:40.796000 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-06-19 10:10:40.796010 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-06-19 10:10:40.796021 | orchestrator | 2025-06-19 10:10:40.796032 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-06-19 10:10:40.796043 | orchestrator | Thursday 19 June 2025 10:10:35 +0000 (0:00:01.326) 0:06:04.469 ********* 2025-06-19 10:10:40.796053 | orchestrator | skipping: [testbed-manager] 2025-06-19 10:10:40.796064 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:10:40.796075 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:10:40.796085 | orchestrator | 
skipping: [testbed-node-5] 2025-06-19 10:10:40.796096 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:10:40.796107 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:10:40.796117 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:10:40.796128 | orchestrator | 2025-06-19 10:10:40.796139 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-06-19 10:10:40.796149 | orchestrator | Thursday 19 June 2025 10:10:36 +0000 (0:00:00.514) 0:06:04.984 ********* 2025-06-19 10:10:40.796160 | orchestrator | ok: [testbed-manager] 2025-06-19 10:10:40.796171 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:10:40.796181 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:10:40.796192 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:10:40.796203 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:10:40.796213 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:10:40.796224 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:10:40.796278 | orchestrator | 2025-06-19 10:10:40.796301 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-06-19 10:10:40.796322 | orchestrator | Thursday 19 June 2025 10:10:39 +0000 (0:00:03.745) 0:06:08.729 ********* 2025-06-19 10:10:40.796342 | orchestrator | skipping: [testbed-manager] 2025-06-19 10:10:40.796359 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:10:40.796372 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:10:40.796382 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:10:40.796393 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:10:40.796403 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:10:40.796414 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:10:40.796424 | orchestrator | 2025-06-19 10:10:40.796436 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python 
bindings from pip)] *** 2025-06-19 10:10:40.796447 | orchestrator | Thursday 19 June 2025 10:10:40 +0000 (0:00:00.514) 0:06:09.244 ********* 2025-06-19 10:10:40.796458 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-06-19 10:10:40.796468 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-06-19 10:10:40.796479 | orchestrator | skipping: [testbed-manager] 2025-06-19 10:10:40.796490 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-06-19 10:10:40.796500 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-06-19 10:10:40.796511 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:10:40.796521 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-06-19 10:10:40.796532 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-06-19 10:10:40.796551 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:10:40.796562 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-06-19 10:10:40.796582 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-06-19 10:10:59.766636 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:10:59.766749 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-06-19 10:10:59.766766 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-06-19 10:10:59.766778 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:10:59.766789 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-06-19 10:10:59.766801 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-06-19 10:10:59.766811 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:10:59.766822 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-06-19 10:10:59.766833 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-06-19 10:10:59.766844 | orchestrator | skipping: [testbed-node-2] 
2025-06-19 10:10:59.766855 | orchestrator | 2025-06-19 10:10:59.766867 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-06-19 10:10:59.766879 | orchestrator | Thursday 19 June 2025 10:10:41 +0000 (0:00:00.552) 0:06:09.797 ********* 2025-06-19 10:10:59.766890 | orchestrator | skipping: [testbed-manager] 2025-06-19 10:10:59.766900 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:10:59.766911 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:10:59.766922 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:10:59.766933 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:10:59.766943 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:10:59.766954 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:10:59.766968 | orchestrator | 2025-06-19 10:10:59.766988 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-06-19 10:10:59.767008 | orchestrator | Thursday 19 June 2025 10:10:41 +0000 (0:00:00.494) 0:06:10.292 ********* 2025-06-19 10:10:59.767028 | orchestrator | skipping: [testbed-manager] 2025-06-19 10:10:59.767047 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:10:59.767064 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:10:59.767075 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:10:59.767086 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:10:59.767096 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:10:59.767107 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:10:59.767117 | orchestrator | 2025-06-19 10:10:59.767128 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-06-19 10:10:59.767139 | orchestrator | Thursday 19 June 2025 10:10:42 +0000 (0:00:00.479) 0:06:10.771 ********* 2025-06-19 10:10:59.767149 | orchestrator | skipping: [testbed-manager] 2025-06-19 10:10:59.767160 | orchestrator | 
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.docker : Ensure that some packages are not installed] *****
Thursday 19 June 2025 10:10:42 +0000 (0:00:00.676) 0:06:11.448 *********
ok: [testbed-manager]
ok: [testbed-node-4]
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.services.docker : Include config tasks] ****************************
Thursday 19 June 2025 10:10:44 +0000 (0:00:01.637) 0:06:13.086 *********
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.services.docker : Create plugins directory] ************************
Thursday 19 June 2025 10:10:45 +0000 (0:00:00.828) 0:06:13.915 *********
ok: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.services.docker : Create systemd overlay directory] ****************
Thursday 19 June 2025 10:10:45 +0000 (0:00:00.821) 0:06:14.736 *********
ok: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.services.docker : Copy systemd overlay file] ***********************
Thursday 19 June 2025 10:10:47 +0000 (0:00:01.037) 0:06:15.773 *********
ok: [testbed-manager]
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
Thursday 19 June 2025 10:10:48 +0000 (0:00:01.381) 0:06:17.155 *********
skipping: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.services.docker : Copy limits configuration file] ******************
Thursday 19 June 2025 10:10:49 +0000 (0:00:01.378) 0:06:18.533 *********
ok: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.services.docker : Copy daemon.json configuration file] *************
Thursday 19 June 2025 10:10:51 +0000 (0:00:01.385) 0:06:19.919 *********
changed: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.services.docker : Include service tasks] ***************************
Thursday 19 June 2025 10:10:52 +0000 (0:00:01.416) 0:06:21.335 *********
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.services.docker : Reload systemd daemon] ***************************
Thursday 19 June 2025 10:10:53 +0000 (0:00:00.994) 0:06:22.330 *********
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.services.docker : Manage service] **********************************
Thursday 19 June 2025 10:10:54 +0000 (0:00:01.346) 0:06:23.677 *********
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.services.docker : Manage docker socket service] ********************
Thursday 19 June 2025 10:10:56 +0000 (0:00:01.118) 0:06:24.795 *********
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.services.docker : Manage containerd service] ***********************
Thursday 19 June 2025 10:10:57 +0000 (0:00:01.400) 0:06:26.196 *********
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.services.docker : Include bootstrap tasks] *************************
Thursday 19 June 2025 10:10:58 +0000 (0:00:01.136) 0:06:27.332 *********
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.services.docker : Flush handlers] **********************************
Thursday 19 June 2025 10:10:59 +0000 (0:00:00.875) 0:06:28.208 *********

TASK [osism.services.docker : Flush handlers] **********************************
Thursday 19 June 2025 10:10:59 +0000 (0:00:00.039) 0:06:28.248 *********

TASK [osism.services.docker : Flush handlers] **********************************
Thursday 19 June 2025 10:10:59 +0000 (0:00:00.039) 0:06:28.287 *********

TASK [osism.services.docker : Flush handlers] **********************************
Thursday 19 June 2025 10:10:59 +0000 (0:00:00.048) 0:06:28.335 *********

TASK [osism.services.docker : Flush handlers] **********************************
Thursday 19 June 2025 10:10:59 +0000 (0:00:00.039) 0:06:28.374 *********

TASK [osism.services.docker : Flush handlers] **********************************
Thursday 19 June 2025 10:10:59 +0000 (0:00:00.038) 0:06:28.413 *********

TASK [osism.services.docker : Flush handlers] **********************************
Thursday 19 June 2025 10:10:59 +0000 (0:00:00.045) 0:06:28.458 *********

RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
Thursday 19 June 2025 10:10:59 +0000 (0:00:00.039) 0:06:28.498 *********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
Thursday 19 June 2025 10:11:01 +0000 (0:00:01.300) 0:06:29.798 *********
changed: [testbed-manager]
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
Thursday 19 June 2025 10:11:02 +0000 (0:00:01.520) 0:06:31.319 *********
changed: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
Thursday 19 June 2025 10:11:03 +0000 (0:00:01.082) 0:06:32.402 *********
skipping: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-1]
changed: [testbed-node-4]
changed: [testbed-node-0]
changed: [testbed-node-2]

RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
Thursday 19 June 2025 10:11:05 +0000 (0:00:02.252) 0:06:34.654 *********
skipping: [testbed-node-3]

TASK [osism.services.docker : Add user to docker group] ************************
Thursday 19 June 2025 10:11:06 +0000 (0:00:00.116) 0:06:34.771 *********
ok: [testbed-manager]
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.services.docker : Log into private registry and force re-authorization] ***
Thursday 19 June 2025 10:11:07 +0000 (0:00:00.998) 0:06:35.769 *********
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.docker : Include facts tasks] *****************************
Thursday 19 June 2025 10:11:07 +0000 (0:00:00.724) 0:06:36.493 *********
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.services.docker : Create facts directory] **************************
Thursday 19 June 2025 10:11:08 +0000 (0:00:00.885) 0:06:37.378 *********
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.services.docker : Copy docker fact files] **************************
Thursday 19 June 2025 10:11:09 +0000 (0:00:00.823) 0:06:38.202 *********
ok: [testbed-manager] => (item=docker_containers)
changed: [testbed-node-3] => (item=docker_containers)
changed: [testbed-node-4] => (item=docker_containers)
changed: [testbed-node-5] => (item=docker_containers)
changed: [testbed-node-0] => (item=docker_containers)
changed: [testbed-node-1] => (item=docker_containers)
changed: [testbed-node-2] => (item=docker_containers)
ok: [testbed-manager] => (item=docker_images)
changed: [testbed-node-3] => (item=docker_images)
changed: [testbed-node-4] => (item=docker_images)
changed: [testbed-node-5] => (item=docker_images)
changed: [testbed-node-0] => (item=docker_images)
changed: [testbed-node-1] => (item=docker_images)
changed: [testbed-node-2] => (item=docker_images)

TASK [osism.commons.docker_compose : This install type is not supported] *******
Thursday 19 June 2025 10:11:11 +0000 (0:00:02.557) 0:06:40.759 *********
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
Thursday 19 June 2025 10:11:12 +0000 (0:00:00.468) 0:06:41.227 *********
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
Thursday 19 June 2025 10:11:13 +0000 (0:00:00.782) 0:06:42.009 *********
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
Thursday 19 June 2025 10:11:14 +0000 (0:00:01.035) 0:06:43.045 *********
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
Thursday 19 June 2025 10:11:15 +0000 (0:00:00.794) 0:06:43.839 *********
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
Thursday 19 June 2025 10:11:15 +0000 (0:00:00.486) 0:06:44.326 *********
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
Thursday 19 June 2025 10:11:16 +0000 (0:00:01.370) 0:06:45.696 *********
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
Thursday 19 June 2025 10:11:17 +0000 (0:00:00.492) 0:06:46.189 *********
ok: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
Thursday 19 June 2025 10:11:24 +0000 (0:00:07.426) 0:06:53.615 *********
ok: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.commons.docker_compose : Enable osism.target] **********************
Thursday 19 June 2025 10:11:26 +0000 (0:00:01.345) 0:06:54.961 *********
ok: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
Thursday 19 June 2025 10:11:27 +0000 (0:00:01.698) 0:06:56.659 *********
ok: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.commons.facts : Create custom facts directory] *********************
Thursday 19 June 2025 10:11:29 +0000 (0:00:01.680) 0:06:58.339 *********
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.commons.facts : Copy fact files] ***********************************
Thursday 19 June 2025 10:11:30 +0000 (0:00:01.085) 0:06:59.425 *********
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
Thursday 19 June 2025 10:11:31 +0000 (0:00:00.821) 0:07:00.246 *********
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.chrony : Gather variables for each operating system] ******
Thursday 19 June 2025 10:11:31 +0000 (0:00:00.510) 0:07:00.757 *********
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
Thursday 19 June 2025 10:11:32 +0000 (0:00:00.663) 0:07:01.421 *********
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
Thursday 19 June 2025 10:11:33 +0000 (0:00:00.504) 0:07:01.925 *********
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.services.chrony : Populate service facts] **************************
Thursday 19 June 2025 10:11:33 +0000 (0:00:00.535) 0:07:02.461 *********
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-0]
ok: [testbed-node-5]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.services.chrony : Manage timesyncd service] ************************
Thursday 19 June 2025 10:11:39 +0000 (0:00:05.690) 0:07:08.152 *********
skipping: [testbed-manager]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.chrony : Include distribution specific install tasks] *****
Thursday 19 June 2025 10:11:39 +0000 (0:00:00.518) 0:07:08.670 *********
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.services.chrony : Install package] *********************************
Thursday 19 June 2025 10:11:40 +0000 (0:00:00.959) 0:07:09.630 *********
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-0]
ok: [testbed-node-5]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.services.chrony : Manage chrony service] ***************************
Thursday 19 June 2025 10:11:42 +0000 (0:00:01.797) 0:07:11.428 *********
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.services.chrony : Check if configuration file exists] **************
Thursday 19 June 2025 10:11:43 +0000 (0:00:01.101) 0:07:12.529 *********
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-0]
ok: [testbed-node-5]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.services.chrony : Copy configuration file] *************************
Thursday 19 June 2025 10:11:44 +0000 (0:00:01.006) 0:07:13.536 *********
changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)

TASK [osism.services.lldpd : Include distribution specific install tasks] ******
Thursday 19 June 2025 10:11:46 +0000 (0:00:01.716) 0:07:15.252 *********
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.services.lldpd : Install lldpd package] ****************************
Thursday 19 June 2025 10:11:47 +0000 (0:00:00.801) 0:07:16.054 *********
changed: [testbed-node-3]
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-4]
2025-06-19 10:11:56.103034 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:11:56.103044 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:11:56.103055 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:11:56.103065 | orchestrator | 2025-06-19 10:11:56.103076 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-06-19 10:11:56.103096 | orchestrator | Thursday 19 June 2025 10:11:56 +0000 (0:00:08.781) 0:07:24.835 ********* 2025-06-19 10:12:11.914602 | orchestrator | ok: [testbed-manager] 2025-06-19 10:12:11.915178 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:12:11.915209 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:12:11.915223 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:12:11.915236 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:12:11.915248 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:12:11.915260 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:12:11.915273 | orchestrator | 2025-06-19 10:12:11.915288 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-06-19 10:12:11.915301 | orchestrator | Thursday 19 June 2025 10:11:57 +0000 (0:00:01.694) 0:07:26.530 ********* 2025-06-19 10:12:11.915314 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:12:11.915326 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:12:11.915366 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:12:11.915379 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:12:11.915391 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:12:11.915429 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:12:11.915441 | orchestrator | 2025-06-19 10:12:11.915453 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-06-19 10:12:11.915465 | orchestrator | Thursday 19 June 2025 10:11:59 +0000 (0:00:01.329) 0:07:27.859 ********* 2025-06-19 10:12:11.915478 | orchestrator | changed: [testbed-manager] 2025-06-19 
10:12:11.915491 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:12:11.915502 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:12:11.915513 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:12:11.915524 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:12:11.915534 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:12:11.915545 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:12:11.915556 | orchestrator | 2025-06-19 10:12:11.915566 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-06-19 10:12:11.915577 | orchestrator | 2025-06-19 10:12:11.915588 | orchestrator | TASK [Include hardening role] ************************************************** 2025-06-19 10:12:11.915598 | orchestrator | Thursday 19 June 2025 10:12:00 +0000 (0:00:01.416) 0:07:29.275 ********* 2025-06-19 10:12:11.915609 | orchestrator | skipping: [testbed-manager] 2025-06-19 10:12:11.915620 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:12:11.915631 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:12:11.915641 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:12:11.915652 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:12:11.915662 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:12:11.915673 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:12:11.915684 | orchestrator | 2025-06-19 10:12:11.915694 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-06-19 10:12:11.915705 | orchestrator | 2025-06-19 10:12:11.915715 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-06-19 10:12:11.915726 | orchestrator | Thursday 19 June 2025 10:12:01 +0000 (0:00:00.518) 0:07:29.794 ********* 2025-06-19 10:12:11.915737 | orchestrator | changed: [testbed-manager] 2025-06-19 10:12:11.915748 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:12:11.915758 | orchestrator 
| changed: [testbed-node-4] 2025-06-19 10:12:11.915769 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:12:11.915779 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:12:11.915790 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:12:11.915801 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:12:11.915811 | orchestrator | 2025-06-19 10:12:11.915822 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-06-19 10:12:11.915833 | orchestrator | Thursday 19 June 2025 10:12:02 +0000 (0:00:01.361) 0:07:31.156 ********* 2025-06-19 10:12:11.915843 | orchestrator | ok: [testbed-manager] 2025-06-19 10:12:11.915854 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:12:11.915865 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:12:11.915875 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:12:11.915886 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:12:11.915896 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:12:11.915907 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:12:11.915918 | orchestrator | 2025-06-19 10:12:11.915929 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-06-19 10:12:11.915939 | orchestrator | Thursday 19 June 2025 10:12:03 +0000 (0:00:01.415) 0:07:32.572 ********* 2025-06-19 10:12:11.915950 | orchestrator | skipping: [testbed-manager] 2025-06-19 10:12:11.915961 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:12:11.915972 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:12:11.915982 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:12:11.915992 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:12:11.916003 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:12:11.916013 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:12:11.916024 | orchestrator | 2025-06-19 10:12:11.916035 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] 
*********** 2025-06-19 10:12:11.916045 | orchestrator | Thursday 19 June 2025 10:12:04 +0000 (0:00:00.947) 0:07:33.519 ********* 2025-06-19 10:12:11.916063 | orchestrator | changed: [testbed-manager] 2025-06-19 10:12:11.916074 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:12:11.916085 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:12:11.916095 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:12:11.916106 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:12:11.916116 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:12:11.916127 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:12:11.916137 | orchestrator | 2025-06-19 10:12:11.916148 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-06-19 10:12:11.916158 | orchestrator | 2025-06-19 10:12:11.916169 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-06-19 10:12:11.916180 | orchestrator | Thursday 19 June 2025 10:12:06 +0000 (0:00:01.258) 0:07:34.778 ********* 2025-06-19 10:12:11.916191 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:12:11.916204 | orchestrator | 2025-06-19 10:12:11.916215 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-06-19 10:12:11.916225 | orchestrator | Thursday 19 June 2025 10:12:06 +0000 (0:00:00.956) 0:07:35.734 ********* 2025-06-19 10:12:11.916236 | orchestrator | ok: [testbed-manager] 2025-06-19 10:12:11.916246 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:12:11.916257 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:12:11.916267 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:12:11.916278 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:12:11.916289 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:12:11.916299 | orchestrator | ok: 
[testbed-node-2] 2025-06-19 10:12:11.916310 | orchestrator | 2025-06-19 10:12:11.916361 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-06-19 10:12:11.916373 | orchestrator | Thursday 19 June 2025 10:12:07 +0000 (0:00:00.821) 0:07:36.556 ********* 2025-06-19 10:12:11.916384 | orchestrator | changed: [testbed-manager] 2025-06-19 10:12:11.916395 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:12:11.916405 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:12:11.916416 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:12:11.916470 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:12:11.916485 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:12:11.916495 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:12:11.916506 | orchestrator | 2025-06-19 10:12:11.916517 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-06-19 10:12:11.916528 | orchestrator | Thursday 19 June 2025 10:12:08 +0000 (0:00:01.133) 0:07:37.690 ********* 2025-06-19 10:12:11.916539 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:12:11.916549 | orchestrator | 2025-06-19 10:12:11.916560 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-06-19 10:12:11.916571 | orchestrator | Thursday 19 June 2025 10:12:09 +0000 (0:00:01.004) 0:07:38.694 ********* 2025-06-19 10:12:11.916614 | orchestrator | ok: [testbed-manager] 2025-06-19 10:12:11.916625 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:12:11.916636 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:12:11.916647 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:12:11.916657 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:12:11.916668 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:12:11.916678 | orchestrator | ok: 
[testbed-node-2] 2025-06-19 10:12:11.916689 | orchestrator | 2025-06-19 10:12:11.916699 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-06-19 10:12:11.916710 | orchestrator | Thursday 19 June 2025 10:12:10 +0000 (0:00:00.815) 0:07:39.510 ********* 2025-06-19 10:12:11.916721 | orchestrator | changed: [testbed-manager] 2025-06-19 10:12:11.916732 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:12:11.916742 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:12:11.916752 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:12:11.916772 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:12:11.916782 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:12:11.916792 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:12:11.916803 | orchestrator | 2025-06-19 10:12:11.916814 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-19 10:12:11.916826 | orchestrator | testbed-manager : ok=162  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-06-19 10:12:11.916837 | orchestrator | testbed-node-0 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-06-19 10:12:11.916848 | orchestrator | testbed-node-1 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-06-19 10:12:11.916859 | orchestrator | testbed-node-2 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-06-19 10:12:11.916870 | orchestrator | testbed-node-3 : ok=169  changed=63  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-06-19 10:12:11.916880 | orchestrator | testbed-node-4 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-06-19 10:12:11.916891 | orchestrator | testbed-node-5 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-06-19 10:12:11.916901 | orchestrator | 2025-06-19 10:12:11.916912 | 
orchestrator | 2025-06-19 10:12:11.916923 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-19 10:12:11.916934 | orchestrator | Thursday 19 June 2025 10:12:11 +0000 (0:00:01.136) 0:07:40.646 ********* 2025-06-19 10:12:11.916945 | orchestrator | =============================================================================== 2025-06-19 10:12:11.916956 | orchestrator | osism.commons.packages : Install required packages --------------------- 74.18s 2025-06-19 10:12:11.916966 | orchestrator | osism.commons.packages : Download required packages -------------------- 36.87s 2025-06-19 10:12:11.916977 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.43s 2025-06-19 10:12:11.916987 | orchestrator | osism.commons.repository : Update package cache ------------------------ 12.88s 2025-06-19 10:12:11.916998 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.42s 2025-06-19 10:12:11.917008 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.03s 2025-06-19 10:12:11.917034 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.51s 2025-06-19 10:12:11.917045 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.41s 2025-06-19 10:12:11.917056 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.78s 2025-06-19 10:12:11.917066 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.70s 2025-06-19 10:12:11.917077 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.11s 2025-06-19 10:12:11.917088 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.70s 2025-06-19 10:12:11.917098 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.54s 
2025-06-19 10:12:11.917109 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.43s 2025-06-19 10:12:11.917128 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.36s 2025-06-19 10:12:12.365590 | orchestrator | osism.services.docker : Add repository ---------------------------------- 6.97s 2025-06-19 10:12:12.365698 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 5.96s 2025-06-19 10:12:12.365713 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 5.72s 2025-06-19 10:12:12.365725 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.69s 2025-06-19 10:12:12.365763 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.56s 2025-06-19 10:12:12.599721 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-06-19 10:12:12.599811 | orchestrator | + osism apply network 2025-06-19 10:12:14.677963 | orchestrator | Registering Redlock._acquired_script 2025-06-19 10:12:14.678115 | orchestrator | Registering Redlock._extend_script 2025-06-19 10:12:14.678131 | orchestrator | Registering Redlock._release_script 2025-06-19 10:12:14.763780 | orchestrator | 2025-06-19 10:12:14 | INFO  | Task 72fa2d77-b091-477b-8ad1-9cae692b66c9 (network) was prepared for execution. 2025-06-19 10:12:14.763871 | orchestrator | 2025-06-19 10:12:14 | INFO  | It takes a moment until task 72fa2d77-b091-477b-8ad1-9cae692b66c9 (network) has been started and output is visible here. 
2025-06-19 10:12:43.221064 | orchestrator |
2025-06-19 10:12:43.221181 | orchestrator | PLAY [Apply role network] ******************************************************
2025-06-19 10:12:43.221199 | orchestrator |
2025-06-19 10:12:43.221212 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2025-06-19 10:12:43.221223 | orchestrator | Thursday 19 June 2025 10:12:18 +0000 (0:00:00.288) 0:00:00.288 *********
2025-06-19 10:12:43.221236 | orchestrator | ok: [testbed-manager]
2025-06-19 10:12:43.221249 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:12:43.221260 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:12:43.221270 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:12:43.221281 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:12:43.221292 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:12:43.221303 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:12:43.221314 | orchestrator |
2025-06-19 10:12:43.221325 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2025-06-19 10:12:43.221336 | orchestrator | Thursday 19 June 2025 10:12:19 +0000 (0:00:00.725) 0:00:01.013 *********
2025-06-19 10:12:43.221348 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-19 10:12:43.221362 | orchestrator |
2025-06-19 10:12:43.221422 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2025-06-19 10:12:43.221435 | orchestrator | Thursday 19 June 2025 10:12:20 +0000 (0:00:01.223) 0:00:02.237 *********
2025-06-19 10:12:43.221446 | orchestrator | ok: [testbed-manager]
2025-06-19 10:12:43.221457 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:12:43.221468 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:12:43.221479 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:12:43.221489 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:12:43.221500 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:12:43.221511 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:12:43.221521 | orchestrator |
2025-06-19 10:12:43.221532 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2025-06-19 10:12:43.221543 | orchestrator | Thursday 19 June 2025 10:12:23 +0000 (0:00:02.104) 0:00:04.342 *********
2025-06-19 10:12:43.221554 | orchestrator | ok: [testbed-manager]
2025-06-19 10:12:43.221565 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:12:43.221576 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:12:43.221588 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:12:43.221600 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:12:43.221611 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:12:43.221625 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:12:43.221644 | orchestrator |
2025-06-19 10:12:43.221661 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2025-06-19 10:12:43.221681 | orchestrator | Thursday 19 June 2025 10:12:24 +0000 (0:00:01.681) 0:00:06.023 *********
2025-06-19 10:12:43.221696 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2025-06-19 10:12:43.221709 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2025-06-19 10:12:43.221722 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2025-06-19 10:12:43.221734 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2025-06-19 10:12:43.221770 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2025-06-19 10:12:43.221782 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2025-06-19 10:12:43.221795 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2025-06-19 10:12:43.221807 | orchestrator |
2025-06-19 10:12:43.221819 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2025-06-19 10:12:43.221831 | orchestrator | Thursday 19 June 2025 10:12:25 +0000 (0:00:01.007) 0:00:07.030 *********
2025-06-19 10:12:43.221843 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-06-19 10:12:43.221856 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-19 10:12:43.221868 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-19 10:12:43.221895 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-06-19 10:12:43.221908 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-06-19 10:12:43.221920 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-06-19 10:12:43.221932 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-06-19 10:12:43.221943 | orchestrator |
2025-06-19 10:12:43.221954 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2025-06-19 10:12:43.221970 | orchestrator | Thursday 19 June 2025 10:12:29 +0000 (0:00:03.337) 0:00:10.368 *********
2025-06-19 10:12:43.221987 | orchestrator | changed: [testbed-manager]
2025-06-19 10:12:43.222005 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:12:43.222089 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:12:43.222103 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:12:43.222113 | orchestrator | changed: [testbed-node-3]
2025-06-19 10:12:43.222124 | orchestrator | changed: [testbed-node-4]
2025-06-19 10:12:43.222135 | orchestrator | changed: [testbed-node-5]
2025-06-19 10:12:43.222145 | orchestrator |
2025-06-19 10:12:43.222156 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2025-06-19 10:12:43.222167 | orchestrator | Thursday 19 June 2025 10:12:30 +0000 (0:00:01.494) 0:00:11.863 *********
2025-06-19 10:12:43.222178 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-19 10:12:43.222188 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-19 10:12:43.222199 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-06-19 10:12:43.222209 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-06-19 10:12:43.222220 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-06-19 10:12:43.222231 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-06-19 10:12:43.222241 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-06-19 10:12:43.222252 | orchestrator |
2025-06-19 10:12:43.222262 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2025-06-19 10:12:43.222273 | orchestrator | Thursday 19 June 2025 10:12:32 +0000 (0:00:01.888) 0:00:13.751 *********
2025-06-19 10:12:43.222284 | orchestrator | ok: [testbed-manager]
2025-06-19 10:12:43.222294 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:12:43.222305 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:12:43.222316 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:12:43.222326 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:12:43.222337 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:12:43.222347 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:12:43.222358 | orchestrator |
2025-06-19 10:12:43.222369 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2025-06-19 10:12:43.222423 | orchestrator | Thursday 19 June 2025 10:12:33 +0000 (0:00:01.129) 0:00:14.881 *********
2025-06-19 10:12:43.222435 | orchestrator | skipping: [testbed-manager]
2025-06-19 10:12:43.222446 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:12:43.222457 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:12:43.222467 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:12:43.222478 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:12:43.222488 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:12:43.222498 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:12:43.222509 | orchestrator |
2025-06-19 10:12:43.222520 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2025-06-19 10:12:43.222531 | orchestrator | Thursday 19 June 2025 10:12:34 +0000 (0:00:00.651) 0:00:15.532 *********
2025-06-19 10:12:43.222553 | orchestrator | ok: [testbed-manager]
2025-06-19 10:12:43.222564 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:12:43.222575 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:12:43.222586 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:12:43.222596 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:12:43.222606 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:12:43.222617 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:12:43.222627 | orchestrator |
2025-06-19 10:12:43.222638 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2025-06-19 10:12:43.222649 | orchestrator | Thursday 19 June 2025 10:12:36 +0000 (0:00:02.312) 0:00:17.845 *********
2025-06-19 10:12:43.222659 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:12:43.222670 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:12:43.222680 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:12:43.222691 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:12:43.222701 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:12:43.222712 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:12:43.222723 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2025-06-19 10:12:43.222734 | orchestrator |
2025-06-19 10:12:43.222745 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2025-06-19 10:12:43.222755 | orchestrator | Thursday 19 June 2025 10:12:37 +0000 (0:00:00.815) 0:00:18.661 *********
2025-06-19 10:12:43.222766 | orchestrator | ok: [testbed-manager]
2025-06-19 10:12:43.222776 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:12:43.222787 | orchestrator | changed: [testbed-node-3]
2025-06-19 10:12:43.222797 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:12:43.222808 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:12:43.222818 | orchestrator | changed: [testbed-node-4]
2025-06-19 10:12:43.222829 | orchestrator | changed: [testbed-node-5]
2025-06-19 10:12:43.222839 | orchestrator |
2025-06-19 10:12:43.222850 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2025-06-19 10:12:43.222860 | orchestrator | Thursday 19 June 2025 10:12:38 +0000 (0:00:01.649) 0:00:20.311 *********
2025-06-19 10:12:43.222872 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-19 10:12:43.222884 | orchestrator |
2025-06-19 10:12:43.222895 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2025-06-19 10:12:43.222906 | orchestrator | Thursday 19 June 2025 10:12:40 +0000 (0:00:00.983) 0:00:21.575 *********
2025-06-19 10:12:43.222916 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:12:43.222927 | orchestrator | ok: [testbed-manager]
2025-06-19 10:12:43.222937 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:12:43.222948 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:12:43.222958 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:12:43.222969 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:12:43.222979 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:12:43.222990 | orchestrator |
2025-06-19 10:12:43.223000 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2025-06-19 10:12:43.223017 | orchestrator | Thursday 19 June 2025 10:12:41 +0000 (0:00:00.791) 0:00:22.559 *********
2025-06-19 10:12:43.223028 | orchestrator | ok: [testbed-manager]
2025-06-19 10:12:43.223039 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:12:43.223050 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:12:43.223060 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:12:43.223070 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:12:43.223081 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:12:43.223091 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:12:43.223102 | orchestrator |
2025-06-19 10:12:43.223112 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2025-06-19 10:12:43.223123 | orchestrator | Thursday 19 June 2025 10:12:42 +0000 (0:00:00.791) 0:00:23.351 *********
2025-06-19 10:12:43.223141 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2025-06-19 10:12:43.223151 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2025-06-19 10:12:43.223162 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2025-06-19 10:12:43.223172 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2025-06-19 10:12:43.223183 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2025-06-19 10:12:43.223194 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2025-06-19 10:12:43.223204 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2025-06-19 10:12:43.223215 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2025-06-19 10:12:43.223225 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2025-06-19 10:12:43.223236 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2025-06-19 10:12:43.223246 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2025-06-19 10:12:43.223257 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2025-06-19 10:12:43.223267 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2025-06-19 10:12:43.223278 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2025-06-19 10:12:43.223288 | orchestrator |
2025-06-19 10:12:43.223306 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2025-06-19 10:12:59.774898 | orchestrator | Thursday 19 June 2025 10:12:43 +0000 (0:00:01.179) 0:00:24.530 *********
2025-06-19 10:12:59.775022 | orchestrator | skipping: [testbed-manager]
2025-06-19 10:12:59.775039 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:12:59.775051 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:12:59.775063 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:12:59.775075 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:12:59.775086 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:12:59.775097 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:12:59.775108 | orchestrator |
2025-06-19 10:12:59.775120 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2025-06-19 10:12:59.775131 | orchestrator | Thursday 19 June 2025 10:12:43 +0000 (0:00:00.652) 0:00:25.182 *********
2025-06-19 10:12:59.775144 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-4, testbed-node-3, testbed-node-2, testbed-node-5
2025-06-19 10:12:59.775158 | orchestrator |
2025-06-19 10:12:59.775169 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2025-06-19 10:12:59.775180 | orchestrator | Thursday 19 June 2025 10:12:48 +0000 (0:00:04.521) 0:00:29.704 *********
2025-06-19 10:12:59.775193 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2025-06-19 10:12:59.775207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2025-06-19 10:12:59.775219 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2025-06-19 10:12:59.775230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2025-06-19 10:12:59.775265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2025-06-19 10:12:59.775277 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2025-06-19 10:12:59.775288 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2025-06-19 10:12:59.775299 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2025-06-19 10:12:59.775310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2025-06-19 10:12:59.775333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2025-06-19 10:12:59.775344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2025-06-19 10:12:59.775374 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2025-06-19 10:12:59.775386 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2025-06-19 10:12:59.775424 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip':
'192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-06-19 10:12:59.775436 | orchestrator | 2025-06-19 10:12:59.775449 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-06-19 10:12:59.775472 | orchestrator | Thursday 19 June 2025 10:12:53 +0000 (0:00:05.441) 0:00:35.145 ********* 2025-06-19 10:12:59.775485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-06-19 10:12:59.775496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-06-19 10:12:59.775507 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-06-19 10:12:59.775527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-06-19 10:12:59.775538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-06-19 10:12:59.775549 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', 
'192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-06-19 10:12:59.775564 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-06-19 10:12:59.775576 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-06-19 10:12:59.775587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-06-19 10:12:59.775598 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-06-19 10:12:59.775609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-06-19 10:12:59.775619 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-06-19 10:12:59.775642 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-06-19 10:13:06.137306 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-06-19 10:13:06.137442 | orchestrator | 2025-06-19 10:13:06.137462 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-06-19 10:13:06.137475 | orchestrator | Thursday 19 June 2025 10:12:59 +0000 (0:00:05.938) 0:00:41.083 ********* 2025-06-19 10:13:06.137488 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-19 10:13:06.137500 | orchestrator | 2025-06-19 10:13:06.137511 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-06-19 10:13:06.137546 | orchestrator | Thursday 19 June 2025 10:13:01 +0000 (0:00:01.273) 0:00:42.357 ********* 2025-06-19 10:13:06.137558 | orchestrator | ok: [testbed-manager] 2025-06-19 10:13:06.137570 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:13:06.137580 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:13:06.137591 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:13:06.137602 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:13:06.137612 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:13:06.137623 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:13:06.137633 | orchestrator | 2025-06-19 10:13:06.137644 | orchestrator | TASK [osism.commons.network : Remove 
unused configuration files] *************** 2025-06-19 10:13:06.137655 | orchestrator | Thursday 19 June 2025 10:13:02 +0000 (0:00:01.168) 0:00:43.526 ********* 2025-06-19 10:13:06.137666 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-19 10:13:06.137696 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-19 10:13:06.137706 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-19 10:13:06.137717 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-19 10:13:06.137728 | orchestrator | skipping: [testbed-manager] 2025-06-19 10:13:06.137739 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-19 10:13:06.137750 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-19 10:13:06.137760 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-19 10:13:06.137771 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-19 10:13:06.137782 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:13:06.137792 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-19 10:13:06.137803 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-19 10:13:06.137814 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-19 10:13:06.137838 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-19 10:13:06.137851 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:13:06.137864 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-19 10:13:06.137876 | orchestrator | skipping: 
[testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-19 10:13:06.137888 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-19 10:13:06.137907 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-19 10:13:06.137926 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:13:06.137943 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-19 10:13:06.137962 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-19 10:13:06.137983 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-19 10:13:06.137999 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-19 10:13:06.138010 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:13:06.138074 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-19 10:13:06.138088 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-19 10:13:06.138100 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-19 10:13:06.138112 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-19 10:13:06.138124 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:13:06.138137 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-19 10:13:06.138159 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-19 10:13:06.138171 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-19 10:13:06.138183 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-19 10:13:06.138196 | 
orchestrator | skipping: [testbed-node-5]
2025-06-19 10:13:06.138208 | orchestrator |
2025-06-19 10:13:06.138219 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2025-06-19 10:13:06.138247 | orchestrator | Thursday 19 June 2025 10:13:04 +0000 (0:00:02.132) 0:00:45.658 *********
2025-06-19 10:13:06.138259 | orchestrator | skipping: [testbed-manager]
2025-06-19 10:13:06.138269 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:13:06.138280 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:13:06.138291 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:13:06.138301 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:13:06.138312 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:13:06.138322 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:13:06.138333 | orchestrator |
2025-06-19 10:13:06.138344 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2025-06-19 10:13:06.138354 | orchestrator | Thursday 19 June 2025 10:13:04 +0000 (0:00:00.666) 0:00:46.325 *********
2025-06-19 10:13:06.138365 | orchestrator | skipping: [testbed-manager]
2025-06-19 10:13:06.138376 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:13:06.138386 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:13:06.138397 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:13:06.138432 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:13:06.138443 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:13:06.138454 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:13:06.138464 | orchestrator |
2025-06-19 10:13:06.138475 | orchestrator | PLAY RECAP *********************************************************************
2025-06-19 10:13:06.138487 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-19 10:13:06.138500 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-19 10:13:06.138511 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-19 10:13:06.138522 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-19 10:13:06.138533 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-19 10:13:06.138544 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-19 10:13:06.138555 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-19 10:13:06.138566 | orchestrator |
2025-06-19 10:13:06.138576 | orchestrator |
2025-06-19 10:13:06.138587 | orchestrator | TASKS RECAP ********************************************************************
2025-06-19 10:13:06.138598 | orchestrator | Thursday 19 June 2025 10:13:05 +0000 (0:00:00.721) 0:00:47.046 *********
2025-06-19 10:13:06.138609 | orchestrator | ===============================================================================
2025-06-19 10:13:06.138620 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.94s
2025-06-19 10:13:06.138637 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.44s
2025-06-19 10:13:06.138648 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.52s
2025-06-19 10:13:06.138659 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.34s
2025-06-19 10:13:06.138677 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.31s
2025-06-19 10:13:06.138688 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.13s
2025-06-19 10:13:06.138699 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.10s
2025-06-19 10:13:06.138710 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.89s
2025-06-19 10:13:06.138720 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.68s
2025-06-19 10:13:06.138731 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.65s
2025-06-19 10:13:06.138742 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.49s
2025-06-19 10:13:06.138753 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.27s
2025-06-19 10:13:06.138763 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.26s
2025-06-19 10:13:06.138774 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.22s
2025-06-19 10:13:06.138785 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.18s
2025-06-19 10:13:06.138795 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.17s
2025-06-19 10:13:06.138806 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.13s
2025-06-19 10:13:06.138817 | orchestrator | osism.commons.network : Create required directories --------------------- 1.01s
2025-06-19 10:13:06.138827 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.98s
2025-06-19 10:13:06.138838 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.82s
2025-06-19 10:13:06.364226 | orchestrator | + osism apply wireguard
2025-06-19 10:13:07.992241 | orchestrator | Registering Redlock._acquired_script
2025-06-19 10:13:07.992336 | orchestrator | Registering Redlock._extend_script
2025-06-19 10:13:07.992351 | orchestrator | Registering Redlock._release_script
2025-06-19 10:13:08.057129 |
orchestrator | 2025-06-19 10:13:08 | INFO  | Task 7bc83263-a74e-427d-8734-78c1e79bf88a (wireguard) was prepared for execution. 2025-06-19 10:13:08.057242 | orchestrator | 2025-06-19 10:13:08 | INFO  | It takes a moment until task 7bc83263-a74e-427d-8734-78c1e79bf88a (wireguard) has been started and output is visible here. 2025-06-19 10:13:27.061848 | orchestrator | 2025-06-19 10:13:27.061994 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-06-19 10:13:27.062072 | orchestrator | 2025-06-19 10:13:27.062087 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-06-19 10:13:27.062098 | orchestrator | Thursday 19 June 2025 10:13:12 +0000 (0:00:00.232) 0:00:00.232 ********* 2025-06-19 10:13:27.062110 | orchestrator | ok: [testbed-manager] 2025-06-19 10:13:27.062123 | orchestrator | 2025-06-19 10:13:27.062134 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-06-19 10:13:27.062145 | orchestrator | Thursday 19 June 2025 10:13:13 +0000 (0:00:01.422) 0:00:01.655 ********* 2025-06-19 10:13:27.062156 | orchestrator | changed: [testbed-manager] 2025-06-19 10:13:27.062168 | orchestrator | 2025-06-19 10:13:27.062179 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-06-19 10:13:27.062190 | orchestrator | Thursday 19 June 2025 10:13:20 +0000 (0:00:06.524) 0:00:08.179 ********* 2025-06-19 10:13:27.062201 | orchestrator | changed: [testbed-manager] 2025-06-19 10:13:27.062213 | orchestrator | 2025-06-19 10:13:27.062224 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-06-19 10:13:27.062235 | orchestrator | Thursday 19 June 2025 10:13:20 +0000 (0:00:00.538) 0:00:08.718 ********* 2025-06-19 10:13:27.062246 | orchestrator | changed: [testbed-manager] 2025-06-19 10:13:27.062257 | orchestrator | 2025-06-19 10:13:27.062268 | 
orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-06-19 10:13:27.062279 | orchestrator | Thursday 19 June 2025 10:13:20 +0000 (0:00:00.438) 0:00:09.156 ********* 2025-06-19 10:13:27.062316 | orchestrator | ok: [testbed-manager] 2025-06-19 10:13:27.062328 | orchestrator | 2025-06-19 10:13:27.062339 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-06-19 10:13:27.062349 | orchestrator | Thursday 19 June 2025 10:13:21 +0000 (0:00:00.524) 0:00:09.681 ********* 2025-06-19 10:13:27.062360 | orchestrator | ok: [testbed-manager] 2025-06-19 10:13:27.062371 | orchestrator | 2025-06-19 10:13:27.062382 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-06-19 10:13:27.062394 | orchestrator | Thursday 19 June 2025 10:13:21 +0000 (0:00:00.480) 0:00:10.162 ********* 2025-06-19 10:13:27.062406 | orchestrator | ok: [testbed-manager] 2025-06-19 10:13:27.062418 | orchestrator | 2025-06-19 10:13:27.062456 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-06-19 10:13:27.062469 | orchestrator | Thursday 19 June 2025 10:13:22 +0000 (0:00:00.369) 0:00:10.532 ********* 2025-06-19 10:13:27.062481 | orchestrator | changed: [testbed-manager] 2025-06-19 10:13:27.062494 | orchestrator | 2025-06-19 10:13:27.062506 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-06-19 10:13:27.062519 | orchestrator | Thursday 19 June 2025 10:13:23 +0000 (0:00:01.113) 0:00:11.645 ********* 2025-06-19 10:13:27.062531 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-19 10:13:27.062544 | orchestrator | changed: [testbed-manager] 2025-06-19 10:13:27.062557 | orchestrator | 2025-06-19 10:13:27.062569 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-06-19 10:13:27.062582 | orchestrator | 
Thursday 19 June 2025 10:13:24 +0000 (0:00:00.830) 0:00:12.476 *********
2025-06-19 10:13:27.062594 | orchestrator | changed: [testbed-manager]
2025-06-19 10:13:27.062607 | orchestrator |
2025-06-19 10:13:27.062633 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2025-06-19 10:13:27.062646 | orchestrator | Thursday 19 June 2025 10:13:25 +0000 (0:00:01.540) 0:00:14.017 *********
2025-06-19 10:13:27.062659 | orchestrator | changed: [testbed-manager]
2025-06-19 10:13:27.062671 | orchestrator |
2025-06-19 10:13:27.062684 | orchestrator | PLAY RECAP *********************************************************************
2025-06-19 10:13:27.062696 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-19 10:13:27.062710 | orchestrator |
2025-06-19 10:13:27.062723 | orchestrator |
2025-06-19 10:13:27.062736 | orchestrator | TASKS RECAP ********************************************************************
2025-06-19 10:13:27.062748 | orchestrator | Thursday 19 June 2025 10:13:26 +0000 (0:00:00.899) 0:00:14.916 *********
2025-06-19 10:13:27.062758 | orchestrator | ===============================================================================
2025-06-19 10:13:27.062769 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.52s
2025-06-19 10:13:27.062780 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.54s
2025-06-19 10:13:27.062791 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.42s
2025-06-19 10:13:27.062810 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.11s
2025-06-19 10:13:27.062828 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.90s
2025-06-19 10:13:27.062845 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.83s
2025-06-19 10:13:27.062863 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.54s
2025-06-19 10:13:27.062879 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.52s
2025-06-19 10:13:27.062896 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.48s
2025-06-19 10:13:27.062915 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.44s
2025-06-19 10:13:27.062933 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.37s
2025-06-19 10:13:27.289885 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2025-06-19 10:13:27.324862 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2025-06-19 10:13:27.324977 | orchestrator | Dload Upload Total Spent Left Speed
2025-06-19 10:13:27.411946 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 160 0 --:--:-- --:--:-- --:--:-- 160
2025-06-19 10:13:27.425523 | orchestrator | + osism apply --environment custom workarounds
2025-06-19 10:13:29.170072 | orchestrator | 2025-06-19 10:13:29 | INFO  | Trying to run play workarounds in environment custom
2025-06-19 10:13:29.174712 | orchestrator | Registering Redlock._acquired_script
2025-06-19 10:13:29.175051 | orchestrator | Registering Redlock._extend_script
2025-06-19 10:13:29.175072 | orchestrator | Registering Redlock._release_script
2025-06-19 10:13:29.236980 | orchestrator | 2025-06-19 10:13:29 | INFO  | Task 004014a1-5bbe-4992-bdd5-e0a7651b0651 (workarounds) was prepared for execution.
2025-06-19 10:13:29.237037 | orchestrator | 2025-06-19 10:13:29 | INFO  | It takes a moment until task 004014a1-5bbe-4992-bdd5-e0a7651b0651 (workarounds) has been started and output is visible here.
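[Editor's note] The "Create systemd networkd netdev files" items earlier in this log follow a simple full-mesh pattern: each host's `dests` list is the set of all underlay endpoints minus its own `local_ip`, with fixed `mtu` and `vni`. A minimal sketch of that pattern (the helper function is ours, not part of the OSISM role; endpoint addresses are taken from the log above):

```python
# Sketch only: reproduce the full-mesh VXLAN peer lists seen in the
# osism.commons.network netdev task output above.
ENDPOINTS = {
    "testbed-manager": "192.168.16.5",
    "testbed-node-0": "192.168.16.10",
    "testbed-node-1": "192.168.16.11",
    "testbed-node-2": "192.168.16.12",
    "testbed-node-3": "192.168.16.13",
    "testbed-node-4": "192.168.16.14",
    "testbed-node-5": "192.168.16.15",
}

def vxlan_item(host: str, vni: int, mtu: int = 1350) -> dict:
    """Build one netdev item for `host`: dests = every endpoint but its own."""
    local_ip = ENDPOINTS[host]
    # Lexicographic sort matches the ordering shown in the log output.
    dests = sorted(ip for h, ip in ENDPOINTS.items() if h != host)
    return {"local_ip": local_ip, "dests": dests, "mtu": mtu, "vni": vni}
```

For example, `vxlan_item("testbed-node-0", vni=42)` yields `local_ip` 192.168.16.10 and the six remaining endpoints as `dests`, matching the logged item for that host.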
2025-06-19 10:13:53.562722 | orchestrator | 2025-06-19 10:13:53.562833 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-19 10:13:53.562849 | orchestrator | 2025-06-19 10:13:53.562862 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-06-19 10:13:53.562873 | orchestrator | Thursday 19 June 2025 10:13:33 +0000 (0:00:00.150) 0:00:00.150 ********* 2025-06-19 10:13:53.562885 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-06-19 10:13:53.562896 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-06-19 10:13:53.562908 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-06-19 10:13:53.562919 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-06-19 10:13:53.562930 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-06-19 10:13:53.562941 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-06-19 10:13:53.562952 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-06-19 10:13:53.562963 | orchestrator | 2025-06-19 10:13:53.562975 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-06-19 10:13:53.562986 | orchestrator | 2025-06-19 10:13:53.562996 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-06-19 10:13:53.563008 | orchestrator | Thursday 19 June 2025 10:13:33 +0000 (0:00:00.789) 0:00:00.940 ********* 2025-06-19 10:13:53.563019 | orchestrator | ok: [testbed-manager] 2025-06-19 10:13:53.563032 | orchestrator | 2025-06-19 10:13:53.563043 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-06-19 10:13:53.563054 | orchestrator | 2025-06-19 10:13:53.563066 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2025-06-19 10:13:53.563077 | orchestrator | Thursday 19 June 2025 10:13:36 +0000 (0:00:02.223) 0:00:03.164 ********* 2025-06-19 10:13:53.563088 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:13:53.563099 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:13:53.563110 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:13:53.563120 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:13:53.563131 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:13:53.563141 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:13:53.563152 | orchestrator | 2025-06-19 10:13:53.563163 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-06-19 10:13:53.563174 | orchestrator | 2025-06-19 10:13:53.563193 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-06-19 10:13:53.563204 | orchestrator | Thursday 19 June 2025 10:13:37 +0000 (0:00:01.757) 0:00:04.921 ********* 2025-06-19 10:13:53.563215 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-19 10:13:53.563227 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-19 10:13:53.563255 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-19 10:13:53.563268 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-19 10:13:53.563281 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-19 10:13:53.563292 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-19 10:13:53.563305 | orchestrator | 2025-06-19 10:13:53.563317 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2025-06-19 10:13:53.563329 | orchestrator | Thursday 19 June 2025 10:13:39 +0000 (0:00:01.496) 0:00:06.418 ********* 2025-06-19 10:13:53.563341 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:13:53.563353 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:13:53.563365 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:13:53.563377 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:13:53.563389 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:13:53.563401 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:13:53.563413 | orchestrator | 2025-06-19 10:13:53.563425 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-06-19 10:13:53.563437 | orchestrator | Thursday 19 June 2025 10:13:43 +0000 (0:00:03.703) 0:00:10.121 ********* 2025-06-19 10:13:53.563449 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:13:53.563487 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:13:53.563500 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:13:53.563512 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:13:53.563524 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:13:53.563536 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:13:53.563548 | orchestrator | 2025-06-19 10:13:53.563560 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-06-19 10:13:53.563572 | orchestrator | 2025-06-19 10:13:53.563584 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-06-19 10:13:53.563597 | orchestrator | Thursday 19 June 2025 10:13:43 +0000 (0:00:00.668) 0:00:10.790 ********* 2025-06-19 10:13:53.563608 | orchestrator | changed: [testbed-manager] 2025-06-19 10:13:53.563621 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:13:53.563633 | orchestrator | changed: [testbed-node-4] 2025-06-19 
10:13:53.563643 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:13:53.563654 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:13:53.563665 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:13:53.563675 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:13:53.563686 | orchestrator | 2025-06-19 10:13:53.563697 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-06-19 10:13:53.563708 | orchestrator | Thursday 19 June 2025 10:13:45 +0000 (0:00:01.647) 0:00:12.437 ********* 2025-06-19 10:13:53.563719 | orchestrator | changed: [testbed-manager] 2025-06-19 10:13:53.563730 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:13:53.563741 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:13:53.563752 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:13:53.563763 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:13:53.563774 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:13:53.563803 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:13:53.563816 | orchestrator | 2025-06-19 10:13:53.563827 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-06-19 10:13:53.563838 | orchestrator | Thursday 19 June 2025 10:13:47 +0000 (0:00:01.606) 0:00:14.043 ********* 2025-06-19 10:13:53.563849 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:13:53.563860 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:13:53.563871 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:13:53.563882 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:13:53.563893 | orchestrator | ok: [testbed-manager] 2025-06-19 10:13:53.563904 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:13:53.563923 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:13:53.563934 | orchestrator | 2025-06-19 10:13:53.563945 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-06-19 10:13:53.563957 | orchestrator 
| Thursday 19 June 2025 10:13:48 +0000 (0:00:01.508) 0:00:15.552 ********* 2025-06-19 10:13:53.563968 | orchestrator | changed: [testbed-manager] 2025-06-19 10:13:53.563979 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:13:53.563990 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:13:53.564001 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:13:53.564012 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:13:53.564023 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:13:53.564034 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:13:53.564046 | orchestrator | 2025-06-19 10:13:53.564057 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-06-19 10:13:53.564068 | orchestrator | Thursday 19 June 2025 10:13:50 +0000 (0:00:01.743) 0:00:17.295 ********* 2025-06-19 10:13:53.564079 | orchestrator | skipping: [testbed-manager] 2025-06-19 10:13:53.564090 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:13:53.564102 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:13:53.564113 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:13:53.564124 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:13:53.564135 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:13:53.564146 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:13:53.564157 | orchestrator | 2025-06-19 10:13:53.564168 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-06-19 10:13:53.564180 | orchestrator | 2025-06-19 10:13:53.564191 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-06-19 10:13:53.564202 | orchestrator | Thursday 19 June 2025 10:13:50 +0000 (0:00:00.614) 0:00:17.909 ********* 2025-06-19 10:13:53.564213 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:13:53.564224 | orchestrator | ok: [testbed-manager] 2025-06-19 10:13:53.564236 | orchestrator | ok: 
[testbed-node-4] 2025-06-19 10:13:53.564247 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:13:53.564258 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:13:53.564270 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:13:53.564281 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:13:53.564292 | orchestrator | 2025-06-19 10:13:53.564304 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-19 10:13:53.564316 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-19 10:13:53.564329 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-19 10:13:53.564340 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-19 10:13:53.564352 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-19 10:13:53.564363 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-19 10:13:53.564374 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-19 10:13:53.564386 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-19 10:13:53.564397 | orchestrator | 2025-06-19 10:13:53.564408 | orchestrator | 2025-06-19 10:13:53.564419 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-19 10:13:53.564431 | orchestrator | Thursday 19 June 2025 10:13:53 +0000 (0:00:02.623) 0:00:20.533 ********* 2025-06-19 10:13:53.564442 | orchestrator | =============================================================================== 2025-06-19 10:13:53.564474 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.70s 2025-06-19 10:13:53.564486 | orchestrator | 
Install python3-docker -------------------------------------------------- 2.62s 2025-06-19 10:13:53.564498 | orchestrator | Apply netplan configuration --------------------------------------------- 2.22s 2025-06-19 10:13:53.564509 | orchestrator | Apply netplan configuration --------------------------------------------- 1.76s 2025-06-19 10:13:53.564520 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.74s 2025-06-19 10:13:53.564531 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.65s 2025-06-19 10:13:53.564543 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.61s 2025-06-19 10:13:53.564554 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.51s 2025-06-19 10:13:53.564565 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.50s 2025-06-19 10:13:53.564576 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.79s 2025-06-19 10:13:53.564588 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.67s 2025-06-19 10:13:53.564606 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.61s 2025-06-19 10:13:54.132418 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-06-19 10:13:55.759256 | orchestrator | Registering Redlock._acquired_script 2025-06-19 10:13:55.759362 | orchestrator | Registering Redlock._extend_script 2025-06-19 10:13:55.759377 | orchestrator | Registering Redlock._release_script 2025-06-19 10:13:55.821074 | orchestrator | 2025-06-19 10:13:55 | INFO  | Task 0dc7361f-65a9-4c4f-a0f5-48027601c16c (reboot) was prepared for execution. 
2025-06-19 10:13:55.821159 | orchestrator | 2025-06-19 10:13:55 | INFO  | It takes a moment until task 0dc7361f-65a9-4c4f-a0f5-48027601c16c (reboot) has been started and output is visible here. 2025-06-19 10:14:05.671188 | orchestrator | 2025-06-19 10:14:05.671306 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-19 10:14:05.671323 | orchestrator | 2025-06-19 10:14:05.671336 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-19 10:14:05.671348 | orchestrator | Thursday 19 June 2025 10:13:59 +0000 (0:00:00.206) 0:00:00.206 ********* 2025-06-19 10:14:05.671359 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:14:05.671371 | orchestrator | 2025-06-19 10:14:05.671381 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-19 10:14:05.671393 | orchestrator | Thursday 19 June 2025 10:13:59 +0000 (0:00:00.097) 0:00:00.304 ********* 2025-06-19 10:14:05.671404 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:14:05.671414 | orchestrator | 2025-06-19 10:14:05.671425 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-19 10:14:05.671436 | orchestrator | Thursday 19 June 2025 10:14:00 +0000 (0:00:00.937) 0:00:01.242 ********* 2025-06-19 10:14:05.671447 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:14:05.671457 | orchestrator | 2025-06-19 10:14:05.671468 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-19 10:14:05.671561 | orchestrator | 2025-06-19 10:14:05.671573 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-19 10:14:05.671584 | orchestrator | Thursday 19 June 2025 10:14:00 +0000 (0:00:00.120) 0:00:01.362 ********* 2025-06-19 10:14:05.671615 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:14:05.671627 | 
orchestrator | 2025-06-19 10:14:05.671638 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-19 10:14:05.671654 | orchestrator | Thursday 19 June 2025 10:14:01 +0000 (0:00:00.100) 0:00:01.463 ********* 2025-06-19 10:14:05.671665 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:14:05.671676 | orchestrator | 2025-06-19 10:14:05.671687 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-19 10:14:05.671698 | orchestrator | Thursday 19 June 2025 10:14:01 +0000 (0:00:00.661) 0:00:02.124 ********* 2025-06-19 10:14:05.671733 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:14:05.671747 | orchestrator | 2025-06-19 10:14:05.671760 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-19 10:14:05.671772 | orchestrator | 2025-06-19 10:14:05.671784 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-19 10:14:05.671797 | orchestrator | Thursday 19 June 2025 10:14:01 +0000 (0:00:00.116) 0:00:02.241 ********* 2025-06-19 10:14:05.671810 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:14:05.671822 | orchestrator | 2025-06-19 10:14:05.671834 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-19 10:14:05.671847 | orchestrator | Thursday 19 June 2025 10:14:01 +0000 (0:00:00.204) 0:00:02.445 ********* 2025-06-19 10:14:05.671860 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:14:05.671872 | orchestrator | 2025-06-19 10:14:05.671884 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-19 10:14:05.671896 | orchestrator | Thursday 19 June 2025 10:14:02 +0000 (0:00:00.645) 0:00:03.091 ********* 2025-06-19 10:14:05.671909 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:14:05.671921 | orchestrator | 2025-06-19 10:14:05.671934 | 
orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-19 10:14:05.671946 | orchestrator | 2025-06-19 10:14:05.671958 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-19 10:14:05.671971 | orchestrator | Thursday 19 June 2025 10:14:02 +0000 (0:00:00.121) 0:00:03.213 ********* 2025-06-19 10:14:05.671983 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:14:05.671995 | orchestrator | 2025-06-19 10:14:05.672007 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-19 10:14:05.672020 | orchestrator | Thursday 19 June 2025 10:14:02 +0000 (0:00:00.100) 0:00:03.313 ********* 2025-06-19 10:14:05.672032 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:14:05.672045 | orchestrator | 2025-06-19 10:14:05.672057 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-19 10:14:05.672070 | orchestrator | Thursday 19 June 2025 10:14:03 +0000 (0:00:00.668) 0:00:03.982 ********* 2025-06-19 10:14:05.672082 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:14:05.672094 | orchestrator | 2025-06-19 10:14:05.672105 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-19 10:14:05.672116 | orchestrator | 2025-06-19 10:14:05.672126 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-19 10:14:05.672137 | orchestrator | Thursday 19 June 2025 10:14:03 +0000 (0:00:00.117) 0:00:04.100 ********* 2025-06-19 10:14:05.672148 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:14:05.672158 | orchestrator | 2025-06-19 10:14:05.672169 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-19 10:14:05.672180 | orchestrator | Thursday 19 June 2025 10:14:03 +0000 (0:00:00.106) 0:00:04.206 ********* 2025-06-19 
10:14:05.672190 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:14:05.672201 | orchestrator | 2025-06-19 10:14:05.672212 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-19 10:14:05.672223 | orchestrator | Thursday 19 June 2025 10:14:04 +0000 (0:00:00.661) 0:00:04.868 ********* 2025-06-19 10:14:05.672233 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:14:05.672244 | orchestrator | 2025-06-19 10:14:05.672255 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-19 10:14:05.672265 | orchestrator | 2025-06-19 10:14:05.672276 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-19 10:14:05.672287 | orchestrator | Thursday 19 June 2025 10:14:04 +0000 (0:00:00.122) 0:00:04.991 ********* 2025-06-19 10:14:05.672297 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:14:05.672308 | orchestrator | 2025-06-19 10:14:05.672319 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-19 10:14:05.672329 | orchestrator | Thursday 19 June 2025 10:14:04 +0000 (0:00:00.099) 0:00:05.090 ********* 2025-06-19 10:14:05.672348 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:14:05.672359 | orchestrator | 2025-06-19 10:14:05.672370 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-19 10:14:05.672381 | orchestrator | Thursday 19 June 2025 10:14:05 +0000 (0:00:00.683) 0:00:05.774 ********* 2025-06-19 10:14:05.672410 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:14:05.672422 | orchestrator | 2025-06-19 10:14:05.672433 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-19 10:14:05.672449 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-19 10:14:05.672461 | orchestrator | 
testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-19 10:14:05.672499 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-19 10:14:05.672512 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-19 10:14:05.672523 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-19 10:14:05.672534 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-19 10:14:05.672545 | orchestrator | 2025-06-19 10:14:05.672556 | orchestrator | 2025-06-19 10:14:05.672572 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-19 10:14:05.672584 | orchestrator | Thursday 19 June 2025 10:14:05 +0000 (0:00:00.034) 0:00:05.808 ********* 2025-06-19 10:14:05.672595 | orchestrator | =============================================================================== 2025-06-19 10:14:05.672606 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.26s 2025-06-19 10:14:05.672616 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.71s 2025-06-19 10:14:05.672627 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.63s 2025-06-19 10:14:05.909291 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-06-19 10:14:07.574461 | orchestrator | Registering Redlock._acquired_script 2025-06-19 10:14:07.574647 | orchestrator | Registering Redlock._extend_script 2025-06-19 10:14:07.574665 | orchestrator | Registering Redlock._release_script 2025-06-19 10:14:07.634429 | orchestrator | 2025-06-19 10:14:07 | INFO  | Task 989f960d-d80a-471a-88ed-d586b3666934 (wait-for-connection) was prepared for execution. 
2025-06-19 10:14:07.634592 | orchestrator | 2025-06-19 10:14:07 | INFO  | It takes a moment until task 989f960d-d80a-471a-88ed-d586b3666934 (wait-for-connection) has been started and output is visible here. 2025-06-19 10:14:24.750248 | orchestrator | 2025-06-19 10:14:24.750394 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-06-19 10:14:24.750421 | orchestrator | 2025-06-19 10:14:24.750433 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-06-19 10:14:24.750445 | orchestrator | Thursday 19 June 2025 10:14:11 +0000 (0:00:00.244) 0:00:00.244 ********* 2025-06-19 10:14:24.750456 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:14:24.750469 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:14:24.750480 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:14:24.750539 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:14:24.750554 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:14:24.750565 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:14:24.750576 | orchestrator | 2025-06-19 10:14:24.750587 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-19 10:14:24.750600 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-19 10:14:24.750639 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-19 10:14:24.750652 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-19 10:14:24.750663 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-19 10:14:24.750674 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-19 10:14:24.750684 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 
ignored=0 2025-06-19 10:14:24.750695 | orchestrator | 2025-06-19 10:14:24.750706 | orchestrator | 2025-06-19 10:14:24.750717 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-19 10:14:24.750728 | orchestrator | Thursday 19 June 2025 10:14:24 +0000 (0:00:12.917) 0:00:13.161 ********* 2025-06-19 10:14:24.750738 | orchestrator | =============================================================================== 2025-06-19 10:14:24.750749 | orchestrator | Wait until remote system is reachable ---------------------------------- 12.92s 2025-06-19 10:14:24.974331 | orchestrator | + osism apply hddtemp 2025-06-19 10:14:26.553060 | orchestrator | Registering Redlock._acquired_script 2025-06-19 10:14:26.553157 | orchestrator | Registering Redlock._extend_script 2025-06-19 10:14:26.553172 | orchestrator | Registering Redlock._release_script 2025-06-19 10:14:26.609873 | orchestrator | 2025-06-19 10:14:26 | INFO  | Task 6f434fb4-307f-4d8f-ad28-5a11a51e8170 (hddtemp) was prepared for execution. 2025-06-19 10:14:26.609957 | orchestrator | 2025-06-19 10:14:26 | INFO  | It takes a moment until task 6f434fb4-307f-4d8f-ad28-5a11a51e8170 (hddtemp) has been started and output is visible here. 
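The reboot sequence above fires `osism apply reboot` without blocking ("do not wait for the reboot to complete") and then runs a separate `wait-for-connection` play that polls until every node is reachable again. A minimal standalone sketch of that second step in plain shell (hypothetical helper name and host list, not the testbed's actual tooling, which uses the Ansible `wait_for_connection` module):

```shell
#!/bin/bash
# Hypothetical sketch: poll each host until its SSH port (22) accepts TCP
# connections again after a reboot, mirroring the wait-for-connection play.
# Returns non-zero if any host is still unreachable when the deadline passes.
wait_for_ssh() {
    local timeout="$1"; shift
    local host deadline=$((SECONDS + timeout))
    for host in "$@"; do
        # /dev/tcp is a bash feature; `timeout 5` bounds each connect attempt.
        until timeout 5 bash -c ">/dev/tcp/${host}/22" 2>/dev/null; do
            if [ "$SECONDS" -ge "$deadline" ]; then
                echo "timeout waiting for ${host}" >&2
                return 1
            fi
            sleep 2
        done
        echo "${host} reachable"
    done
}
```

Splitting "reboot without waiting" from "wait for connection" lets all nodes reboot in parallel instead of serializing a full reboot-and-wait cycle per host.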
2025-06-19 10:14:53.644244 | orchestrator | 2025-06-19 10:14:53.644379 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-06-19 10:14:53.644406 | orchestrator | 2025-06-19 10:14:53.644426 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-06-19 10:14:53.644444 | orchestrator | Thursday 19 June 2025 10:14:30 +0000 (0:00:00.227) 0:00:00.227 ********* 2025-06-19 10:14:53.644463 | orchestrator | ok: [testbed-manager] 2025-06-19 10:14:53.644484 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:14:53.644502 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:14:53.644518 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:14:53.644556 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:14:53.644567 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:14:53.644578 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:14:53.644589 | orchestrator | 2025-06-19 10:14:53.644600 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-06-19 10:14:53.644611 | orchestrator | Thursday 19 June 2025 10:14:31 +0000 (0:00:00.563) 0:00:00.791 ********* 2025-06-19 10:14:53.644624 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-19 10:14:53.644638 | orchestrator | 2025-06-19 10:14:53.644649 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-06-19 10:14:53.644672 | orchestrator | Thursday 19 June 2025 10:14:32 +0000 (0:00:00.979) 0:00:01.771 ********* 2025-06-19 10:14:53.644689 | orchestrator | ok: [testbed-manager] 2025-06-19 10:14:53.644708 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:14:53.644725 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:14:53.644744 | 
orchestrator | ok: [testbed-node-2] 2025-06-19 10:14:53.644762 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:14:53.644781 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:14:53.644798 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:14:53.644817 | orchestrator | 2025-06-19 10:14:53.644863 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-06-19 10:14:53.644883 | orchestrator | Thursday 19 June 2025 10:14:34 +0000 (0:00:01.987) 0:00:03.758 ********* 2025-06-19 10:14:53.644901 | orchestrator | changed: [testbed-manager] 2025-06-19 10:14:53.644921 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:14:53.644940 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:14:53.644958 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:14:53.644976 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:14:53.644995 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:14:53.645013 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:14:53.645031 | orchestrator | 2025-06-19 10:14:53.645050 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-06-19 10:14:53.645069 | orchestrator | Thursday 19 June 2025 10:14:35 +0000 (0:00:01.015) 0:00:04.774 ********* 2025-06-19 10:14:53.645082 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:14:53.645093 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:14:53.645104 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:14:53.645114 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:14:53.645125 | orchestrator | ok: [testbed-manager] 2025-06-19 10:14:53.645136 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:14:53.645146 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:14:53.645157 | orchestrator | 2025-06-19 10:14:53.645168 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-06-19 10:14:53.645179 | orchestrator | Thursday 19 June 2025 10:14:36 +0000 
(0:00:01.209) 0:00:05.984 ********* 2025-06-19 10:14:53.645189 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:14:53.645200 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:14:53.645210 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:14:53.645221 | orchestrator | changed: [testbed-manager] 2025-06-19 10:14:53.645232 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:14:53.645242 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:14:53.645252 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:14:53.645263 | orchestrator | 2025-06-19 10:14:53.645274 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-06-19 10:14:53.645284 | orchestrator | Thursday 19 June 2025 10:14:37 +0000 (0:00:00.844) 0:00:06.828 ********* 2025-06-19 10:14:53.645295 | orchestrator | changed: [testbed-manager] 2025-06-19 10:14:53.645305 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:14:53.645316 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:14:53.645326 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:14:53.645337 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:14:53.645347 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:14:53.645358 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:14:53.645369 | orchestrator | 2025-06-19 10:14:53.645380 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-06-19 10:14:53.645391 | orchestrator | Thursday 19 June 2025 10:14:49 +0000 (0:00:12.882) 0:00:19.711 ********* 2025-06-19 10:14:53.645402 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-19 10:14:53.645413 | orchestrator | 2025-06-19 10:14:53.645424 | orchestrator | TASK [osism.services.hddtemp : 
Manage lm-sensors service] ********************** 2025-06-19 10:14:53.645435 | orchestrator | Thursday 19 June 2025 10:14:51 +0000 (0:00:01.371) 0:00:21.082 ********* 2025-06-19 10:14:53.645445 | orchestrator | changed: [testbed-manager] 2025-06-19 10:14:53.645456 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:14:53.645467 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:14:53.645477 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:14:53.645488 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:14:53.645499 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:14:53.645509 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:14:53.645520 | orchestrator | 2025-06-19 10:14:53.645564 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-19 10:14:53.645588 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-19 10:14:53.645620 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-19 10:14:53.645633 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-19 10:14:53.645652 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-19 10:14:53.645672 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-19 10:14:53.645691 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-19 10:14:53.645709 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-19 10:14:53.645728 | orchestrator | 2025-06-19 10:14:53.645739 | orchestrator | 2025-06-19 10:14:53.645750 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-19 10:14:53.645769 | 
orchestrator | Thursday 19 June 2025 10:14:53 +0000 (0:00:01.930) 0:00:23.013 ********* 2025-06-19 10:14:53.645780 | orchestrator | =============================================================================== 2025-06-19 10:14:53.645791 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.88s 2025-06-19 10:14:53.645802 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.99s 2025-06-19 10:14:53.645812 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.93s 2025-06-19 10:14:53.645823 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.37s 2025-06-19 10:14:53.645834 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.21s 2025-06-19 10:14:53.645845 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.02s 2025-06-19 10:14:53.645855 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 0.98s 2025-06-19 10:14:53.645866 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.84s 2025-06-19 10:14:53.645877 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.56s 2025-06-19 10:14:53.870787 | orchestrator | ++ semver latest 7.1.1 2025-06-19 10:14:53.924676 | orchestrator | + [[ -1 -ge 0 ]] 2025-06-19 10:14:53.924738 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-06-19 10:14:53.924750 | orchestrator | + sudo systemctl restart manager.service 2025-06-19 10:15:34.580408 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-06-19 10:15:34.580596 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-06-19 10:15:34.580625 | orchestrator | + local max_attempts=60 2025-06-19 10:15:34.580640 | orchestrator | + local name=ceph-ansible 2025-06-19 10:15:34.581632 | orchestrator | + local 
attempt_num=1 2025-06-19 10:15:34.581658 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-19 10:15:34.616475 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-19 10:15:34.616623 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-19 10:15:34.616641 | orchestrator | + sleep 5 2025-06-19 10:15:39.618709 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-19 10:15:39.652129 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-19 10:15:39.652219 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-19 10:15:39.652242 | orchestrator | + sleep 5 2025-06-19 10:15:44.655845 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-19 10:15:44.698565 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-19 10:15:44.698761 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-19 10:15:44.698828 | orchestrator | + sleep 5 2025-06-19 10:15:49.703537 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-19 10:15:49.739197 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-19 10:15:49.739286 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-19 10:15:49.739309 | orchestrator | + sleep 5 2025-06-19 10:15:54.744927 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-19 10:15:54.773074 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-19 10:15:54.773159 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-19 10:15:54.773174 | orchestrator | + sleep 5 2025-06-19 10:15:59.778174 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-19 10:15:59.818831 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-19 10:15:59.818899 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-19 10:15:59.818905 | orchestrator | + sleep 5 
2025-06-19 10:16:04.823888 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-19 10:16:04.864538 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-19 10:16:04.864658 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-19 10:16:04.864675 | orchestrator | + sleep 5 2025-06-19 10:16:09.870216 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-19 10:16:09.900032 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-19 10:16:09.900134 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-19 10:16:09.900150 | orchestrator | + sleep 5 2025-06-19 10:16:14.908350 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-19 10:16:14.964174 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-19 10:16:14.964257 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-19 10:16:14.964271 | orchestrator | + sleep 5 2025-06-19 10:16:19.967123 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-19 10:16:20.000433 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-19 10:16:20.000506 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-19 10:16:20.000520 | orchestrator | + sleep 5 2025-06-19 10:16:25.004775 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-19 10:16:25.042381 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-19 10:16:25.042463 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-19 10:16:25.042478 | orchestrator | + sleep 5 2025-06-19 10:16:30.047149 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-19 10:16:30.084070 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-19 10:16:30.084156 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-19 10:16:30.084172 | orchestrator | + sleep 5 2025-06-19 
10:16:35.088206 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-19 10:16:35.134978 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-19 10:16:35.135078 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-19 10:16:35.135101 | orchestrator | + sleep 5 2025-06-19 10:16:40.139299 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-19 10:16:40.182896 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-19 10:16:40.182980 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-06-19 10:16:40.182996 | orchestrator | + local max_attempts=60 2025-06-19 10:16:40.183009 | orchestrator | + local name=kolla-ansible 2025-06-19 10:16:40.183020 | orchestrator | + local attempt_num=1 2025-06-19 10:16:40.183032 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-06-19 10:16:40.221282 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-19 10:16:40.221347 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-06-19 10:16:40.221361 | orchestrator | + local max_attempts=60 2025-06-19 10:16:40.221373 | orchestrator | + local name=osism-ansible 2025-06-19 10:16:40.221385 | orchestrator | + local attempt_num=1 2025-06-19 10:16:40.221773 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-06-19 10:16:40.255203 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-19 10:16:40.255263 | orchestrator | + [[ true == \t\r\u\e ]] 2025-06-19 10:16:40.255275 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-06-19 10:16:40.423480 | orchestrator | ARA in ceph-ansible already disabled. 2025-06-19 10:16:40.561609 | orchestrator | ARA in kolla-ansible already disabled. 2025-06-19 10:16:40.837932 | orchestrator | ARA in osism-kubernetes already disabled. 
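The xtrace above comes from a retry helper in the testbed scripts. A minimal reconstruction from the trace alone, assuming the signature `wait_for_container_healthy <max_attempts> <name>` seen in the log (this is a sketch, not the authoritative implementation):

```shell
#!/usr/bin/env bash
# Sketch of wait_for_container_healthy, reconstructed from the xtrace above.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    # Poll Docker's health status until the container reports "healthy",
    # giving up after max_attempts polls spaced 5 seconds apart.
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "Container ${name} did not become healthy in time" >&2
            return 1
        fi
        sleep 5
    done
}
```

In the run above, ceph-ansible cycles through `unhealthy` and then `starting` for roughly a minute of polls before the check finally passes, after which kolla-ansible and osism-ansible are already healthy on the first poll.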
2025-06-19 10:16:40.839397 | orchestrator | + osism apply gather-facts 2025-06-19 10:16:42.515971 | orchestrator | Registering Redlock._acquired_script 2025-06-19 10:16:42.516070 | orchestrator | Registering Redlock._extend_script 2025-06-19 10:16:42.516085 | orchestrator | Registering Redlock._release_script 2025-06-19 10:16:42.566866 | orchestrator | 2025-06-19 10:16:42 | INFO  | Task e03f2d78-d4dd-4624-aba7-9d002a48d6fe (gather-facts) was prepared for execution. 2025-06-19 10:16:42.566919 | orchestrator | 2025-06-19 10:16:42 | INFO  | It takes a moment until task e03f2d78-d4dd-4624-aba7-9d002a48d6fe (gather-facts) has been started and output is visible here. 2025-06-19 10:16:52.461854 | orchestrator | 2025-06-19 10:16:52.461939 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-19 10:16:52.461947 | orchestrator | 2025-06-19 10:16:52.461951 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-06-19 10:16:52.461956 | orchestrator | Thursday 19 June 2025 10:16:46 +0000 (0:00:00.199) 0:00:00.199 ********* 2025-06-19 10:16:52.461961 | orchestrator | ok: [testbed-manager] 2025-06-19 10:16:52.461966 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:16:52.461970 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:16:52.461974 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:16:52.461979 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:16:52.461983 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:16:52.461987 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:16:52.461991 | orchestrator | 2025-06-19 10:16:52.461995 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-06-19 10:16:52.461999 | orchestrator | 2025-06-19 10:16:52.462003 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-06-19 10:16:52.462008 | orchestrator | Thursday 19 June 2025 10:16:51 +0000 
(0:00:05.747) 0:00:05.946 ********* 2025-06-19 10:16:52.462012 | orchestrator | skipping: [testbed-manager] 2025-06-19 10:16:52.462038 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:16:52.462042 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:16:52.462046 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:16:52.462050 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:16:52.462054 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:16:52.462058 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:16:52.462062 | orchestrator | 2025-06-19 10:16:52.462066 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-19 10:16:52.462070 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-19 10:16:52.462075 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-19 10:16:52.462079 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-19 10:16:52.462083 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-19 10:16:52.462086 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-19 10:16:52.462090 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-19 10:16:52.462094 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-19 10:16:52.462098 | orchestrator | 2025-06-19 10:16:52.462102 | orchestrator | 2025-06-19 10:16:52.462105 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-19 10:16:52.462109 | orchestrator | Thursday 19 June 2025 10:16:52 +0000 (0:00:00.375) 0:00:06.322 ********* 2025-06-19 10:16:52.462113 | orchestrator | 
=============================================================================== 2025-06-19 10:16:52.462117 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.75s 2025-06-19 10:16:52.462138 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.38s 2025-06-19 10:16:52.625172 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-06-19 10:16:52.635710 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-06-19 10:16:52.649119 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-06-19 10:16:52.657742 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-06-19 10:16:52.669641 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-06-19 10:16:52.679534 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-06-19 10:16:52.688186 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-06-19 10:16:52.696935 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-06-19 10:16:52.708409 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-06-19 10:16:52.726964 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-06-19 10:16:52.742394 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-06-19 10:16:52.751563 | 
orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-06-19 10:16:52.765280 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-06-19 10:16:52.777705 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-06-19 10:16:52.788856 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-06-19 10:16:52.804913 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-06-19 10:16:52.819125 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-06-19 10:16:52.833547 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-06-19 10:16:52.854213 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-06-19 10:16:52.871934 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-06-19 10:16:52.887795 | orchestrator | + [[ false == \t\r\u\e ]] 2025-06-19 10:16:53.252991 | orchestrator | ok: Runtime: 0:20:28.859448 2025-06-19 10:16:53.363692 | 2025-06-19 10:16:53.363821 | TASK [Deploy services] 2025-06-19 10:16:53.898668 | orchestrator | skipping: Conditional result was False 2025-06-19 10:16:53.918284 | 2025-06-19 10:16:53.918470 | TASK [Deploy in a nutshell] 2025-06-19 10:16:54.604009 | orchestrator | 2025-06-19 10:16:54.604166 | orchestrator | # PULL IMAGES 2025-06-19 10:16:54.604183 | orchestrator | 2025-06-19 10:16:54.604194 | orchestrator | + set -e 2025-06-19 10:16:54.604207 | orchestrator | + source 
/opt/configuration/scripts/include.sh 2025-06-19 10:16:54.604223 | orchestrator | ++ export INTERACTIVE=false 2025-06-19 10:16:54.604234 | orchestrator | ++ INTERACTIVE=false 2025-06-19 10:16:54.604269 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-19 10:16:54.604286 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-19 10:16:54.604297 | orchestrator | + source /opt/manager-vars.sh 2025-06-19 10:16:54.604305 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-19 10:16:54.604319 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-19 10:16:54.604327 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-19 10:16:54.604340 | orchestrator | ++ CEPH_VERSION=reef 2025-06-19 10:16:54.604349 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-19 10:16:54.604362 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-19 10:16:54.604370 | orchestrator | ++ export MANAGER_VERSION=latest 2025-06-19 10:16:54.604380 | orchestrator | ++ MANAGER_VERSION=latest 2025-06-19 10:16:54.604388 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-19 10:16:54.604400 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-19 10:16:54.604408 | orchestrator | ++ export ARA=false 2025-06-19 10:16:54.604416 | orchestrator | ++ ARA=false 2025-06-19 10:16:54.604424 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-19 10:16:54.604432 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-19 10:16:54.604439 | orchestrator | ++ export TEMPEST=false 2025-06-19 10:16:54.604447 | orchestrator | ++ TEMPEST=false 2025-06-19 10:16:54.604455 | orchestrator | ++ export IS_ZUUL=true 2025-06-19 10:16:54.604463 | orchestrator | ++ IS_ZUUL=true 2025-06-19 10:16:54.604471 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.19 2025-06-19 10:16:54.604479 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.19 2025-06-19 10:16:54.604486 | orchestrator | ++ export EXTERNAL_API=false 2025-06-19 10:16:54.604494 | orchestrator | ++ EXTERNAL_API=false 2025-06-19 10:16:54.604501 | orchestrator | 
++ export IMAGE_USER=ubuntu 2025-06-19 10:16:54.604510 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-19 10:16:54.604518 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-19 10:16:54.604525 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-19 10:16:54.604533 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-19 10:16:54.604546 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-19 10:16:54.604554 | orchestrator | + echo 2025-06-19 10:16:54.604562 | orchestrator | + echo '# PULL IMAGES' 2025-06-19 10:16:54.604570 | orchestrator | + echo 2025-06-19 10:16:54.604671 | orchestrator | ++ semver latest 7.0.0 2025-06-19 10:16:54.644768 | orchestrator | + [[ -1 -ge 0 ]] 2025-06-19 10:16:54.644830 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-06-19 10:16:54.644839 | orchestrator | + osism apply -r 2 -e custom pull-images 2025-06-19 10:16:56.067492 | orchestrator | 2025-06-19 10:16:56 | INFO  | Trying to run play pull-images in environment custom 2025-06-19 10:16:56.071451 | orchestrator | Registering Redlock._acquired_script 2025-06-19 10:16:56.071521 | orchestrator | Registering Redlock._extend_script 2025-06-19 10:16:56.071533 | orchestrator | Registering Redlock._release_script 2025-06-19 10:16:56.123377 | orchestrator | 2025-06-19 10:16:56 | INFO  | Task e2293ed7-6b4b-4d99-afcc-584abd62d971 (pull-images) was prepared for execution. 2025-06-19 10:16:56.123463 | orchestrator | 2025-06-19 10:16:56 | INFO  | It takes a moment until task e2293ed7-6b4b-4d99-afcc-584abd62d971 (pull-images) has been started and output is visible here. 
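The `semver latest 7.0.0` call above prints `-1`, so the `[[ -1 -ge 0 ]]` test fails and the script falls back to an explicit check for the moving `latest` tag. A hedged sketch of that gate, assuming (from the trace) that the `semver` helper prints a negative/zero/positive comparison result; the function name here is illustrative, not from the source:

```shell
# Illustrative version gate reconstructed from the trace: take the branch
# when MANAGER_VERSION compares >= the minimum, or when it is the moving
# "latest" tag, which the semver helper cannot order and reports as -1.
version_at_least() {
    local version=$1 minimum=$2
    [[ $(semver "$version" "$minimum") -ge 0 ]] || [[ "$version" == latest ]]
}
```

With `MANAGER_VERSION=latest` (as exported from `/opt/manager-vars.sh` above), the gate passes and `osism apply -r 2 -e custom pull-images` runs.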
2025-06-19 10:18:56.315436 | orchestrator | 2025-06-19 10:18:56.315561 | orchestrator | PLAY [Pull images] ************************************************************* 2025-06-19 10:18:56.315579 | orchestrator | 2025-06-19 10:18:56.315591 | orchestrator | TASK [Pull keystone image] ***************************************************** 2025-06-19 10:18:56.315614 | orchestrator | Thursday 19 June 2025 10:16:59 +0000 (0:00:00.159) 0:00:00.159 ********* 2025-06-19 10:18:56.315625 | orchestrator | changed: [testbed-manager] 2025-06-19 10:18:56.315637 | orchestrator | 2025-06-19 10:18:56.315649 | orchestrator | TASK [Pull other images] ******************************************************* 2025-06-19 10:18:56.315660 | orchestrator | Thursday 19 June 2025 10:18:04 +0000 (0:01:04.607) 0:01:04.766 ********* 2025-06-19 10:18:56.315672 | orchestrator | changed: [testbed-manager] => (item=aodh) 2025-06-19 10:18:56.315687 | orchestrator | changed: [testbed-manager] => (item=barbican) 2025-06-19 10:18:56.315698 | orchestrator | changed: [testbed-manager] => (item=ceilometer) 2025-06-19 10:18:56.315741 | orchestrator | changed: [testbed-manager] => (item=cinder) 2025-06-19 10:18:56.315757 | orchestrator | changed: [testbed-manager] => (item=common) 2025-06-19 10:18:56.315768 | orchestrator | changed: [testbed-manager] => (item=designate) 2025-06-19 10:18:56.315779 | orchestrator | changed: [testbed-manager] => (item=glance) 2025-06-19 10:18:56.315790 | orchestrator | changed: [testbed-manager] => (item=grafana) 2025-06-19 10:18:56.315800 | orchestrator | changed: [testbed-manager] => (item=horizon) 2025-06-19 10:18:56.315811 | orchestrator | changed: [testbed-manager] => (item=ironic) 2025-06-19 10:18:56.315822 | orchestrator | changed: [testbed-manager] => (item=loadbalancer) 2025-06-19 10:18:56.315833 | orchestrator | changed: [testbed-manager] => (item=magnum) 2025-06-19 10:18:56.315844 | orchestrator | changed: [testbed-manager] => (item=mariadb) 2025-06-19 10:18:56.315854 
| orchestrator | changed: [testbed-manager] => (item=memcached) 2025-06-19 10:18:56.315903 | orchestrator | changed: [testbed-manager] => (item=neutron) 2025-06-19 10:18:56.315916 | orchestrator | changed: [testbed-manager] => (item=nova) 2025-06-19 10:18:56.315927 | orchestrator | changed: [testbed-manager] => (item=octavia) 2025-06-19 10:18:56.315938 | orchestrator | changed: [testbed-manager] => (item=opensearch) 2025-06-19 10:18:56.315949 | orchestrator | changed: [testbed-manager] => (item=openvswitch) 2025-06-19 10:18:56.315960 | orchestrator | changed: [testbed-manager] => (item=ovn) 2025-06-19 10:18:56.315970 | orchestrator | changed: [testbed-manager] => (item=placement) 2025-06-19 10:18:56.315981 | orchestrator | changed: [testbed-manager] => (item=rabbitmq) 2025-06-19 10:18:56.315992 | orchestrator | changed: [testbed-manager] => (item=redis) 2025-06-19 10:18:56.316002 | orchestrator | changed: [testbed-manager] => (item=skyline) 2025-06-19 10:18:56.316013 | orchestrator | 2025-06-19 10:18:56.316024 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-19 10:18:56.316036 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-19 10:18:56.316048 | orchestrator | 2025-06-19 10:18:56.316059 | orchestrator | 2025-06-19 10:18:56.316070 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-19 10:18:56.316081 | orchestrator | Thursday 19 June 2025 10:18:55 +0000 (0:00:51.522) 0:01:56.289 ********* 2025-06-19 10:18:56.316093 | orchestrator | =============================================================================== 2025-06-19 10:18:56.316103 | orchestrator | Pull keystone image ---------------------------------------------------- 64.61s 2025-06-19 10:18:56.316114 | orchestrator | Pull other images ------------------------------------------------------ 51.52s 2025-06-19 10:18:58.340332 | orchestrator 
| 2025-06-19 10:18:58 | INFO  | Trying to run play wipe-partitions in environment custom 2025-06-19 10:18:58.344490 | orchestrator | Registering Redlock._acquired_script 2025-06-19 10:18:58.344524 | orchestrator | Registering Redlock._extend_script 2025-06-19 10:18:58.344536 | orchestrator | Registering Redlock._release_script 2025-06-19 10:18:58.405258 | orchestrator | 2025-06-19 10:18:58 | INFO  | Task ca02aed1-568f-49c0-a691-1f97db2a3e27 (wipe-partitions) was prepared for execution. 2025-06-19 10:18:58.405336 | orchestrator | 2025-06-19 10:18:58 | INFO  | It takes a moment until task ca02aed1-568f-49c0-a691-1f97db2a3e27 (wipe-partitions) has been started and output is visible here. 2025-06-19 10:19:10.172621 | orchestrator | 2025-06-19 10:19:10.172749 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-06-19 10:19:10.172767 | orchestrator | 2025-06-19 10:19:10.172778 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-06-19 10:19:10.172791 | orchestrator | Thursday 19 June 2025 10:19:02 +0000 (0:00:00.106) 0:00:00.106 ********* 2025-06-19 10:19:10.172802 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:19:10.172814 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:19:10.172825 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:19:10.172836 | orchestrator | 2025-06-19 10:19:10.172871 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-06-19 10:19:10.172963 | orchestrator | Thursday 19 June 2025 10:19:02 +0000 (0:00:00.538) 0:00:00.645 ********* 2025-06-19 10:19:10.172978 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:19:10.172989 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:19:10.172999 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:19:10.173010 | orchestrator | 2025-06-19 10:19:10.173020 | orchestrator | TASK [Find all logical devices with prefix ceph] 
******************************* 2025-06-19 10:19:10.173031 | orchestrator | Thursday 19 June 2025 10:19:02 +0000 (0:00:00.238) 0:00:00.884 ********* 2025-06-19 10:19:10.173042 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:19:10.173053 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:19:10.173064 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:19:10.173075 | orchestrator | 2025-06-19 10:19:10.173085 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-06-19 10:19:10.173096 | orchestrator | Thursday 19 June 2025 10:19:03 +0000 (0:00:00.659) 0:00:01.544 ********* 2025-06-19 10:19:10.173108 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:19:10.173118 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:19:10.173129 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:19:10.173139 | orchestrator | 2025-06-19 10:19:10.173154 | orchestrator | TASK [Check device availability] *********************************************** 2025-06-19 10:19:10.173166 | orchestrator | Thursday 19 June 2025 10:19:03 +0000 (0:00:00.237) 0:00:01.781 ********* 2025-06-19 10:19:10.173176 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-06-19 10:19:10.173188 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-06-19 10:19:10.173199 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-06-19 10:19:10.173209 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-06-19 10:19:10.173220 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-06-19 10:19:10.173231 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-06-19 10:19:10.173241 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-06-19 10:19:10.173252 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-06-19 10:19:10.173263 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-06-19 10:19:10.173274 | orchestrator | 2025-06-19 
10:19:10.173285 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-06-19 10:19:10.173295 | orchestrator | Thursday 19 June 2025 10:19:04 +0000 (0:00:01.232) 0:00:03.014 ********* 2025-06-19 10:19:10.173307 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-06-19 10:19:10.173317 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-06-19 10:19:10.173329 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-06-19 10:19:10.173340 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-06-19 10:19:10.173350 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-06-19 10:19:10.173361 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2025-06-19 10:19:10.173372 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-06-19 10:19:10.173383 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-06-19 10:19:10.173393 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-06-19 10:19:10.173404 | orchestrator | 2025-06-19 10:19:10.173415 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-06-19 10:19:10.173425 | orchestrator | Thursday 19 June 2025 10:19:06 +0000 (0:00:01.378) 0:00:04.393 ********* 2025-06-19 10:19:10.173437 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-06-19 10:19:10.173447 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-06-19 10:19:10.173458 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-06-19 10:19:10.173469 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-06-19 10:19:10.173480 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-06-19 10:19:10.173490 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-06-19 10:19:10.173501 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-06-19 10:19:10.173519 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-06-19 
10:19:10.173530 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-06-19 10:19:10.173541 | orchestrator | 2025-06-19 10:19:10.173552 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-06-19 10:19:10.173563 | orchestrator | Thursday 19 June 2025 10:19:08 +0000 (0:00:02.230) 0:00:06.623 ********* 2025-06-19 10:19:10.173573 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:19:10.173584 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:19:10.173595 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:19:10.173605 | orchestrator | 2025-06-19 10:19:10.173616 | orchestrator | TASK [Request device events from the kernel] *********************************** 2025-06-19 10:19:10.173627 | orchestrator | Thursday 19 June 2025 10:19:09 +0000 (0:00:00.694) 0:00:07.318 ********* 2025-06-19 10:19:10.173638 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:19:10.173654 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:19:10.173665 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:19:10.173676 | orchestrator | 2025-06-19 10:19:10.173687 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-19 10:19:10.173699 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-19 10:19:10.173711 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-19 10:19:10.173742 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-19 10:19:10.173754 | orchestrator | 2025-06-19 10:19:10.173764 | orchestrator | 2025-06-19 10:19:10.173775 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-19 10:19:10.173786 | orchestrator | Thursday 19 June 2025 10:19:09 +0000 (0:00:00.639) 0:00:07.958 ********* 2025-06-19 10:19:10.173797 | 
orchestrator | =============================================================================== 2025-06-19 10:19:10.173807 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.23s 2025-06-19 10:19:10.173818 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.38s 2025-06-19 10:19:10.173829 | orchestrator | Check device availability ----------------------------------------------- 1.23s 2025-06-19 10:19:10.173840 | orchestrator | Reload udev rules ------------------------------------------------------- 0.69s 2025-06-19 10:19:10.173850 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.66s 2025-06-19 10:19:10.173861 | orchestrator | Request device events from the kernel ----------------------------------- 0.64s 2025-06-19 10:19:10.173872 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.54s 2025-06-19 10:19:10.173883 | orchestrator | Remove all rook related logical devices --------------------------------- 0.24s 2025-06-19 10:19:10.173916 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.24s 2025-06-19 10:19:11.571084 | orchestrator | Registering Redlock._acquired_script 2025-06-19 10:19:11.571173 | orchestrator | Registering Redlock._extend_script 2025-06-19 10:19:11.571187 | orchestrator | Registering Redlock._release_script 2025-06-19 10:19:11.613081 | orchestrator | 2025-06-19 10:19:11 | INFO  | Task 28ae9848-538c-4a69-b47e-4e9a19b733bc (facts) was prepared for execution. 2025-06-19 10:19:11.614146 | orchestrator | 2025-06-19 10:19:11 | INFO  | It takes a moment until task 28ae9848-538c-4a69-b47e-4e9a19b733bc (facts) has been started and output is visible here. 
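Condensed to plain shell, the per-device work of the wipe-partitions play above looks roughly like the following. This is an illustrative sketch: the play drives these steps through Ansible tasks, and the device list is taken from the log output.

```shell
# Sketch of the "Wipe partitions" sequence as plain shell (run as root on
# the storage nodes). Destructive: removes on-disk signatures, zeroes the
# first 32M of each device, then refreshes udev so the kernel re-reads them.
wipe_ceph_devices() {
    local dev
    for dev in "$@"; do
        wipefs --all "$dev"                                  # wipe filesystem/RAID/partition signatures
        dd if=/dev/zero of="$dev" bs=1M count=32 conv=fsync  # overwrite first 32M with zeros
    done
    udevadm control --reload-rules   # "Reload udev rules"
    udevadm trigger                  # "Request device events from the kernel"
}
# Example (destructive!): wipe_ceph_devices /dev/sdb /dev/sdc /dev/sdd
```

Wiping ensures ceph-ansible later sees the OSD devices (`/dev/sdb`..`/dev/sdd` on testbed-node-3..5 in this run) as blank.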
2025-06-19 10:19:22.263980 | orchestrator |
2025-06-19 10:19:22.264093 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-06-19 10:19:22.264110 | orchestrator |
2025-06-19 10:19:22.264122 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-06-19 10:19:22.264134 | orchestrator | Thursday 19 June 2025 10:19:14 +0000 (0:00:00.201) 0:00:00.202 *********
2025-06-19 10:19:22.264176 | orchestrator | ok: [testbed-manager]
2025-06-19 10:19:22.264189 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:19:22.264200 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:19:22.264210 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:19:22.264221 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:19:22.264231 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:19:22.264241 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:19:22.264252 | orchestrator |
2025-06-19 10:19:22.264263 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-06-19 10:19:22.264274 | orchestrator | Thursday 19 June 2025 10:19:15 +0000 (0:00:01.031) 0:00:01.233 *********
2025-06-19 10:19:22.264285 | orchestrator | skipping: [testbed-manager]
2025-06-19 10:19:22.264296 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:19:22.264306 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:19:22.264317 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:19:22.264327 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:19:22.264337 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:19:22.264348 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:19:22.264359 | orchestrator |
2025-06-19 10:19:22.264369 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-19 10:19:22.264380 | orchestrator |
2025-06-19 10:19:22.264390 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-19 10:19:22.264401 | orchestrator | Thursday 19 June 2025 10:19:16 +0000 (0:00:01.035) 0:00:02.269 *********
2025-06-19 10:19:22.264412 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:19:22.264422 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:19:22.264433 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:19:22.264444 | orchestrator | ok: [testbed-manager]
2025-06-19 10:19:22.264455 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:19:22.264467 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:19:22.264479 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:19:22.264490 | orchestrator |
2025-06-19 10:19:22.264503 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-06-19 10:19:22.264515 | orchestrator |
2025-06-19 10:19:22.264526 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-06-19 10:19:22.264553 | orchestrator | Thursday 19 June 2025 10:19:21 +0000 (0:00:04.624) 0:00:06.893 *********
2025-06-19 10:19:22.264566 | orchestrator | skipping: [testbed-manager]
2025-06-19 10:19:22.264578 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:19:22.264590 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:19:22.264602 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:19:22.264614 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:19:22.264626 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:19:22.264638 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:19:22.264650 | orchestrator |
2025-06-19 10:19:22.264662 | orchestrator | PLAY RECAP *********************************************************************
2025-06-19 10:19:22.264674 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-19 10:19:22.264688 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-19 10:19:22.264700 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-19 10:19:22.264712 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-19 10:19:22.264724 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-19 10:19:22.264736 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-19 10:19:22.264748 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-19 10:19:22.264769 | orchestrator |
2025-06-19 10:19:22.264786 | orchestrator |
2025-06-19 10:19:22.264799 | orchestrator | TASKS RECAP ********************************************************************
2025-06-19 10:19:22.264812 | orchestrator | Thursday 19 June 2025 10:19:21 +0000 (0:00:00.550) 0:00:07.444 *********
2025-06-19 10:19:22.264823 | orchestrator | ===============================================================================
2025-06-19 10:19:22.264834 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.62s
2025-06-19 10:19:22.264845 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.04s
2025-06-19 10:19:22.264856 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.03s
2025-06-19 10:19:22.264867 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.55s
2025-06-19 10:19:24.231198 | orchestrator | 2025-06-19 10:19:24 | INFO  | Task 3e91f7f2-44f0-414b-9190-814445dac04b (ceph-configure-lvm-volumes) was prepared for execution.
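The per-task timing lines in the TASKS RECAP above come from an Ansible task-profiling callback. In a stock Ansible setup this output shape is typically produced by `ansible.posix.profile_tasks`; a minimal `ansible.cfg` fragment to enable it might look like the following (a sketch — how the OSISM images actually enable it is not shown in this log):

```ini
[defaults]
# Print a per-task start timestamp and a duration-sorted recap,
# matching the "TASKS RECAP" section seen in this job output.
callbacks_enabled = ansible.posix.profile_tasks
```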
2025-06-19 10:19:24.231305 | orchestrator | 2025-06-19 10:19:24 | INFO  | It takes a moment until task 3e91f7f2-44f0-414b-9190-814445dac04b (ceph-configure-lvm-volumes) has been started and output is visible here.
2025-06-19 10:19:36.339912 | orchestrator |
2025-06-19 10:19:36.340074 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-06-19 10:19:36.340090 | orchestrator |
2025-06-19 10:19:36.340103 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-19 10:19:36.340114 | orchestrator | Thursday 19 June 2025 10:19:28 +0000 (0:00:00.371) 0:00:00.371 *********
2025-06-19 10:19:36.340126 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-19 10:19:36.340138 | orchestrator |
2025-06-19 10:19:36.340149 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-06-19 10:19:36.340160 | orchestrator | Thursday 19 June 2025 10:19:28 +0000 (0:00:00.249) 0:00:00.620 *********
2025-06-19 10:19:36.340171 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:19:36.340183 | orchestrator |
2025-06-19 10:19:36.340194 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:19:36.340205 | orchestrator | Thursday 19 June 2025 10:19:29 +0000 (0:00:00.284) 0:00:00.905 *********
2025-06-19 10:19:36.340216 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-06-19 10:19:36.340227 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-06-19 10:19:36.340241 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-06-19 10:19:36.340252 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-06-19 10:19:36.340263 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-06-19 10:19:36.340274 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-06-19 10:19:36.340285 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-06-19 10:19:36.340295 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-06-19 10:19:36.340306 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-06-19 10:19:36.340317 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-06-19 10:19:36.340328 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-06-19 10:19:36.340339 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-06-19 10:19:36.340350 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-06-19 10:19:36.340361 | orchestrator |
2025-06-19 10:19:36.340372 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:19:36.340408 | orchestrator | Thursday 19 June 2025 10:19:29 +0000 (0:00:00.370) 0:00:01.275 *********
2025-06-19 10:19:36.340422 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:19:36.340434 | orchestrator |
2025-06-19 10:19:36.340446 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:19:36.340459 | orchestrator | Thursday 19 June 2025 10:19:29 +0000 (0:00:00.584) 0:00:01.859 *********
2025-06-19 10:19:36.340470 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:19:36.340482 | orchestrator |
2025-06-19 10:19:36.340495 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:19:36.340507 | orchestrator | Thursday 19 June 2025 10:19:30 +0000 (0:00:00.226) 0:00:02.085 *********
2025-06-19 10:19:36.340519 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:19:36.340531 | orchestrator |
2025-06-19 10:19:36.340544 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:19:36.340557 | orchestrator | Thursday 19 June 2025 10:19:30 +0000 (0:00:00.220) 0:00:02.306 *********
2025-06-19 10:19:36.340569 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:19:36.340581 | orchestrator |
2025-06-19 10:19:36.340594 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:19:36.340606 | orchestrator | Thursday 19 June 2025 10:19:30 +0000 (0:00:00.216) 0:00:02.522 *********
2025-06-19 10:19:36.340626 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:19:36.340639 | orchestrator |
2025-06-19 10:19:36.340652 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:19:36.340665 | orchestrator | Thursday 19 June 2025 10:19:30 +0000 (0:00:00.193) 0:00:02.715 *********
2025-06-19 10:19:36.340677 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:19:36.340689 | orchestrator |
2025-06-19 10:19:36.340701 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:19:36.340713 | orchestrator | Thursday 19 June 2025 10:19:31 +0000 (0:00:00.220) 0:00:02.936 *********
2025-06-19 10:19:36.340725 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:19:36.340737 | orchestrator |
2025-06-19 10:19:36.340749 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:19:36.340761 | orchestrator | Thursday 19 June 2025 10:19:31 +0000 (0:00:00.220) 0:00:03.156 *********
2025-06-19 10:19:36.340773 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:19:36.340784 | orchestrator |
2025-06-19 10:19:36.340795 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:19:36.340806 | orchestrator | Thursday 19 June 2025 10:19:31 +0000 (0:00:00.211) 0:00:03.368 *********
2025-06-19 10:19:36.340817 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_236643a8-3fbf-4a38-ac5c-7d15a0179c3a)
2025-06-19 10:19:36.340829 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_236643a8-3fbf-4a38-ac5c-7d15a0179c3a)
2025-06-19 10:19:36.340840 | orchestrator |
2025-06-19 10:19:36.340851 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:19:36.340862 | orchestrator | Thursday 19 June 2025 10:19:31 +0000 (0:00:00.408) 0:00:03.776 *********
2025-06-19 10:19:36.340890 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5fba7027-7a45-483b-8644-e0c0ef304581)
2025-06-19 10:19:36.340902 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5fba7027-7a45-483b-8644-e0c0ef304581)
2025-06-19 10:19:36.340913 | orchestrator |
2025-06-19 10:19:36.340948 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:19:36.340969 | orchestrator | Thursday 19 June 2025 10:19:32 +0000 (0:00:00.433) 0:00:04.209 *********
2025-06-19 10:19:36.340980 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5cdb3fff-d4f1-405f-abd7-b446ee32738c)
2025-06-19 10:19:36.340991 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5cdb3fff-d4f1-405f-abd7-b446ee32738c)
2025-06-19 10:19:36.341002 | orchestrator |
2025-06-19 10:19:36.341013 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:19:36.341032 | orchestrator | Thursday 19 June 2025 10:19:32 +0000 (0:00:00.637) 0:00:04.847 *********
2025-06-19 10:19:36.341042 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_6c4f0114-96df-472d-8cd2-75acad9ce658)
2025-06-19 10:19:36.341053 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_6c4f0114-96df-472d-8cd2-75acad9ce658)
2025-06-19 10:19:36.341064 | orchestrator |
2025-06-19 10:19:36.341081 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:19:36.341092 | orchestrator | Thursday 19 June 2025 10:19:33 +0000 (0:00:00.631) 0:00:05.478 *********
2025-06-19 10:19:36.341103 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-06-19 10:19:36.341114 | orchestrator |
2025-06-19 10:19:36.341125 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:19:36.341135 | orchestrator | Thursday 19 June 2025 10:19:34 +0000 (0:00:00.777) 0:00:06.256 *********
2025-06-19 10:19:36.341146 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-06-19 10:19:36.341157 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-06-19 10:19:36.341168 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-06-19 10:19:36.341178 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-06-19 10:19:36.341189 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-06-19 10:19:36.341199 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-06-19 10:19:36.341210 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-06-19 10:19:36.341221 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-06-19 10:19:36.341231 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-06-19 10:19:36.341242 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-06-19 10:19:36.341253 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-06-19 10:19:36.341263 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-06-19 10:19:36.341274 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-06-19 10:19:36.341285 | orchestrator |
2025-06-19 10:19:36.341295 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:19:36.341306 | orchestrator | Thursday 19 June 2025 10:19:34 +0000 (0:00:00.388) 0:00:06.644 *********
2025-06-19 10:19:36.341317 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:19:36.341328 | orchestrator |
2025-06-19 10:19:36.341339 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:19:36.341350 | orchestrator | Thursday 19 June 2025 10:19:34 +0000 (0:00:00.204) 0:00:06.849 *********
2025-06-19 10:19:36.341361 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:19:36.341371 | orchestrator |
2025-06-19 10:19:36.341382 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:19:36.341393 | orchestrator | Thursday 19 June 2025 10:19:35 +0000 (0:00:00.210) 0:00:07.060 *********
2025-06-19 10:19:36.341404 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:19:36.341415 | orchestrator |
2025-06-19 10:19:36.341426 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:19:36.341436 | orchestrator | Thursday 19 June 2025 10:19:35 +0000 (0:00:00.192) 0:00:07.253 *********
2025-06-19 10:19:36.341449 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:19:36.341468 | orchestrator |
2025-06-19 10:19:36.341486 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:19:36.341505 | orchestrator | Thursday 19 June 2025 10:19:35 +0000 (0:00:00.187) 0:00:07.440 *********
2025-06-19 10:19:36.341536 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:19:36.341555 | orchestrator |
2025-06-19 10:19:36.341573 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:19:36.341591 | orchestrator | Thursday 19 June 2025 10:19:35 +0000 (0:00:00.203) 0:00:07.643 *********
2025-06-19 10:19:36.341609 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:19:36.341628 | orchestrator |
2025-06-19 10:19:36.341647 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:19:36.341666 | orchestrator | Thursday 19 June 2025 10:19:35 +0000 (0:00:00.164) 0:00:07.808 *********
2025-06-19 10:19:36.341685 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:19:36.341704 | orchestrator |
2025-06-19 10:19:36.341724 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:19:36.341735 | orchestrator | Thursday 19 June 2025 10:19:36 +0000 (0:00:00.205) 0:00:08.013 *********
2025-06-19 10:19:36.341754 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:19:44.090135 | orchestrator |
2025-06-19 10:19:44.090247 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:19:44.090264 | orchestrator | Thursday 19 June 2025 10:19:36 +0000 (0:00:00.184) 0:00:08.197 *********
2025-06-19 10:19:44.090276 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-06-19 10:19:44.090288 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-06-19 10:19:44.090300 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-06-19 10:19:44.090310 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-06-19 10:19:44.090321 | orchestrator |
2025-06-19 10:19:44.090332 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:19:44.090343 | orchestrator | Thursday 19 June 2025 10:19:37 +0000 (0:00:00.882) 0:00:09.080 *********
2025-06-19 10:19:44.090354 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:19:44.090364 | orchestrator |
2025-06-19 10:19:44.090375 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:19:44.090386 | orchestrator | Thursday 19 June 2025 10:19:37 +0000 (0:00:00.212) 0:00:09.292 *********
2025-06-19 10:19:44.090396 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:19:44.090407 | orchestrator |
2025-06-19 10:19:44.090418 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:19:44.090428 | orchestrator | Thursday 19 June 2025 10:19:37 +0000 (0:00:00.198) 0:00:09.490 *********
2025-06-19 10:19:44.090439 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:19:44.090450 | orchestrator |
2025-06-19 10:19:44.090460 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:19:44.090471 | orchestrator | Thursday 19 June 2025 10:19:37 +0000 (0:00:00.271) 0:00:09.761 *********
2025-06-19 10:19:44.090482 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:19:44.090493 | orchestrator |
2025-06-19 10:19:44.090503 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-06-19 10:19:44.090514 | orchestrator | Thursday 19 June 2025 10:19:38 +0000 (0:00:00.255) 0:00:10.017 *********
2025-06-19 10:19:44.090525 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2025-06-19 10:19:44.090536 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2025-06-19 10:19:44.090546 | orchestrator |
2025-06-19 10:19:44.090557 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-06-19 10:19:44.090568 | orchestrator | Thursday 19 June 2025 10:19:38 +0000 (0:00:00.256) 0:00:10.274 *********
2025-06-19 10:19:44.090578 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:19:44.090589 | orchestrator |
2025-06-19 10:19:44.090600 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-06-19 10:19:44.090612 | orchestrator | Thursday 19 June 2025 10:19:38 +0000 (0:00:00.180) 0:00:10.454 *********
2025-06-19 10:19:44.090624 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:19:44.090636 | orchestrator |
2025-06-19 10:19:44.090649 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-06-19 10:19:44.090704 | orchestrator | Thursday 19 June 2025 10:19:38 +0000 (0:00:00.127) 0:00:10.582 *********
2025-06-19 10:19:44.090718 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:19:44.090730 | orchestrator |
2025-06-19 10:19:44.090742 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-06-19 10:19:44.090754 | orchestrator | Thursday 19 June 2025 10:19:38 +0000 (0:00:00.152) 0:00:10.735 *********
2025-06-19 10:19:44.090766 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:19:44.090779 | orchestrator |
2025-06-19 10:19:44.090791 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-06-19 10:19:44.090803 | orchestrator | Thursday 19 June 2025 10:19:39 +0000 (0:00:00.169) 0:00:10.904 *********
2025-06-19 10:19:44.090816 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3f69fe47-683a-554f-92f7-031e2a26df27'}})
2025-06-19 10:19:44.090829 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '04cfa187-5820-5d05-93de-747bac6f19c1'}})
2025-06-19 10:19:44.090841 | orchestrator |
2025-06-19 10:19:44.090854 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-06-19 10:19:44.090866 | orchestrator | Thursday 19 June 2025 10:19:39 +0000 (0:00:00.202) 0:00:11.107 *********
2025-06-19 10:19:44.090879 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3f69fe47-683a-554f-92f7-031e2a26df27'}})
2025-06-19 10:19:44.090898 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '04cfa187-5820-5d05-93de-747bac6f19c1'}})
2025-06-19 10:19:44.090911 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:19:44.090923 | orchestrator |
2025-06-19 10:19:44.090952 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-06-19 10:19:44.090965 | orchestrator | Thursday 19 June 2025 10:19:39 +0000 (0:00:00.169) 0:00:11.276 *********
2025-06-19 10:19:44.090978 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3f69fe47-683a-554f-92f7-031e2a26df27'}})
2025-06-19 10:19:44.090989 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '04cfa187-5820-5d05-93de-747bac6f19c1'}})
2025-06-19 10:19:44.091000 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:19:44.091010 | orchestrator |
2025-06-19 10:19:44.091021 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-06-19 10:19:44.091032 | orchestrator | Thursday 19 June 2025 10:19:39 +0000 (0:00:00.150) 0:00:11.427 *********
2025-06-19 10:19:44.091042 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3f69fe47-683a-554f-92f7-031e2a26df27'}})
2025-06-19 10:19:44.091053 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '04cfa187-5820-5d05-93de-747bac6f19c1'}})
2025-06-19 10:19:44.091064 | orchestrator | skipping: [testbed-node-3]
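Of the four "Generate lvm_volumes structure" variants, only the block-only branch ran in this job. Judging from the configuration data printed later in this play, each `ceph_osd_devices` entry is turned into an `lvm_volumes` entry by deriving the LV name (`osd-block-<uuid>`) and VG name (`ceph-<uuid>`) from `osd_lvm_uuid`. A minimal Python sketch of that mapping (the function name is illustrative, not taken from the playbook):

```python
def build_lvm_volumes(ceph_osd_devices: dict) -> list:
    """Sketch of the block-only lvm_volumes derivation observed in this run:
    each OSD device's UUID yields an osd-block-<uuid> LV in a ceph-<uuid> VG."""
    return [
        {
            "data": f"osd-block-{spec['osd_lvm_uuid']}",
            "data_vg": f"ceph-{spec['osd_lvm_uuid']}",
        }
        for device, spec in sorted(ceph_osd_devices.items())
    ]

# Device map as printed by the "Print ceph_osd_devices" task below.
devices = {
    "sdb": {"osd_lvm_uuid": "3f69fe47-683a-554f-92f7-031e2a26df27"},
    "sdc": {"osd_lvm_uuid": "04cfa187-5820-5d05-93de-747bac6f19c1"},
}
```

This reproduces the `lvm_volumes` list that the "Print configuration data" task emits for testbed-node-3.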
2025-06-19 10:19:44.091074 | orchestrator |
2025-06-19 10:19:44.091102 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-06-19 10:19:44.091114 | orchestrator | Thursday 19 June 2025 10:19:39 +0000 (0:00:00.354) 0:00:11.781 *********
2025-06-19 10:19:44.091125 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:19:44.091135 | orchestrator |
2025-06-19 10:19:44.091146 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-06-19 10:19:44.091157 | orchestrator | Thursday 19 June 2025 10:19:40 +0000 (0:00:00.157) 0:00:11.939 *********
2025-06-19 10:19:44.091167 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:19:44.091178 | orchestrator |
2025-06-19 10:19:44.091188 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-06-19 10:19:44.091199 | orchestrator | Thursday 19 June 2025 10:19:40 +0000 (0:00:00.147) 0:00:12.086 *********
2025-06-19 10:19:44.091210 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:19:44.091220 | orchestrator |
2025-06-19 10:19:44.091231 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-06-19 10:19:44.091241 | orchestrator | Thursday 19 June 2025 10:19:40 +0000 (0:00:00.132) 0:00:12.219 *********
2025-06-19 10:19:44.091261 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:19:44.091272 | orchestrator |
2025-06-19 10:19:44.091283 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-06-19 10:19:44.091293 | orchestrator | Thursday 19 June 2025 10:19:40 +0000 (0:00:00.170) 0:00:12.389 *********
2025-06-19 10:19:44.091304 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:19:44.091315 | orchestrator |
2025-06-19 10:19:44.091331 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-06-19 10:19:44.091342 | orchestrator | Thursday 19 June 2025 10:19:40 +0000 (0:00:00.126) 0:00:12.515 *********
2025-06-19 10:19:44.091353 | orchestrator | ok: [testbed-node-3] => {
2025-06-19 10:19:44.091363 | orchestrator |  "ceph_osd_devices": {
2025-06-19 10:19:44.091374 | orchestrator |  "sdb": {
2025-06-19 10:19:44.091385 | orchestrator |  "osd_lvm_uuid": "3f69fe47-683a-554f-92f7-031e2a26df27"
2025-06-19 10:19:44.091396 | orchestrator |  },
2025-06-19 10:19:44.091406 | orchestrator |  "sdc": {
2025-06-19 10:19:44.091417 | orchestrator |  "osd_lvm_uuid": "04cfa187-5820-5d05-93de-747bac6f19c1"
2025-06-19 10:19:44.091427 | orchestrator |  }
2025-06-19 10:19:44.091438 | orchestrator |  }
2025-06-19 10:19:44.091449 | orchestrator | }
2025-06-19 10:19:44.091460 | orchestrator |
2025-06-19 10:19:44.091470 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-06-19 10:19:44.091481 | orchestrator | Thursday 19 June 2025 10:19:40 +0000 (0:00:00.142) 0:00:12.658 *********
2025-06-19 10:19:44.091492 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:19:44.091502 | orchestrator |
2025-06-19 10:19:44.091513 | orchestrator | TASK [Print DB devices] ********************************************************
2025-06-19 10:19:44.091523 | orchestrator | Thursday 19 June 2025 10:19:40 +0000 (0:00:00.133) 0:00:12.791 *********
2025-06-19 10:19:44.091534 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:19:44.091544 | orchestrator |
2025-06-19 10:19:44.091555 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-06-19 10:19:44.091566 | orchestrator | Thursday 19 June 2025 10:19:41 +0000 (0:00:00.142) 0:00:12.934 *********
2025-06-19 10:19:44.091576 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:19:44.091587 | orchestrator |
2025-06-19 10:19:44.091598 | orchestrator | TASK [Print configuration data] ************************************************
2025-06-19 10:19:44.091608 | orchestrator | Thursday 19 June 2025 10:19:41 +0000 (0:00:00.150) 0:00:13.084 *********
2025-06-19 10:19:44.091622 | orchestrator | changed: [testbed-node-3] => {
2025-06-19 10:19:44.091633 | orchestrator |  "_ceph_configure_lvm_config_data": {
2025-06-19 10:19:44.091644 | orchestrator |  "ceph_osd_devices": {
2025-06-19 10:19:44.091655 | orchestrator |  "sdb": {
2025-06-19 10:19:44.091665 | orchestrator |  "osd_lvm_uuid": "3f69fe47-683a-554f-92f7-031e2a26df27"
2025-06-19 10:19:44.091676 | orchestrator |  },
2025-06-19 10:19:44.091687 | orchestrator |  "sdc": {
2025-06-19 10:19:44.091698 | orchestrator |  "osd_lvm_uuid": "04cfa187-5820-5d05-93de-747bac6f19c1"
2025-06-19 10:19:44.091708 | orchestrator |  }
2025-06-19 10:19:44.091719 | orchestrator |  },
2025-06-19 10:19:44.091729 | orchestrator |  "lvm_volumes": [
2025-06-19 10:19:44.091740 | orchestrator |  {
2025-06-19 10:19:44.091750 | orchestrator |  "data": "osd-block-3f69fe47-683a-554f-92f7-031e2a26df27",
2025-06-19 10:19:44.091761 | orchestrator |  "data_vg": "ceph-3f69fe47-683a-554f-92f7-031e2a26df27"
2025-06-19 10:19:44.091772 | orchestrator |  },
2025-06-19 10:19:44.091782 | orchestrator |  {
2025-06-19 10:19:44.091793 | orchestrator |  "data": "osd-block-04cfa187-5820-5d05-93de-747bac6f19c1",
2025-06-19 10:19:44.091804 | orchestrator |  "data_vg": "ceph-04cfa187-5820-5d05-93de-747bac6f19c1"
2025-06-19 10:19:44.091814 | orchestrator |  }
2025-06-19 10:19:44.091824 | orchestrator |  ]
2025-06-19 10:19:44.091842 | orchestrator |  }
2025-06-19 10:19:44.091852 | orchestrator | }
2025-06-19 10:19:44.091863 | orchestrator |
2025-06-19 10:19:44.091874 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-06-19 10:19:44.091884 | orchestrator | Thursday 19 June 2025 10:19:41 +0000 (0:00:00.202) 0:00:13.287 *********
2025-06-19 10:19:44.091895 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-19 10:19:44.091905 | orchestrator |
2025-06-19 10:19:44.091916 | orchestrator |
2025-06-19 10:19:44.091927 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-06-19 10:19:44.091968 | orchestrator |
2025-06-19 10:19:44.091980 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-19 10:19:44.091990 | orchestrator | Thursday 19 June 2025 10:19:43 +0000 (0:00:02.190) 0:00:15.477 *********
2025-06-19 10:19:44.092001 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-06-19 10:19:44.092011 | orchestrator |
2025-06-19 10:19:44.092022 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-06-19 10:19:44.092032 | orchestrator | Thursday 19 June 2025 10:19:43 +0000 (0:00:00.240) 0:00:15.718 *********
2025-06-19 10:19:44.092043 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:19:44.092054 | orchestrator |
2025-06-19 10:19:44.092071 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:19:44.092071 | orchestrator | Thursday 19 June 2025 10:19:44 +0000 (0:00:00.226) 0:00:15.945 *********
2025-06-19 10:19:52.315334 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-06-19 10:19:52.315434 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-06-19 10:19:52.315450 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-06-19 10:19:52.315462 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-06-19 10:19:52.315491 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-06-19 10:19:52.315503 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-06-19 10:19:52.315514 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-06-19 10:19:52.315525 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-06-19 10:19:52.315536 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-06-19 10:19:52.315547 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-06-19 10:19:52.315558 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-06-19 10:19:52.315569 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-06-19 10:19:52.315580 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-06-19 10:19:52.315591 | orchestrator |
2025-06-19 10:19:52.315603 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:19:52.315615 | orchestrator | Thursday 19 June 2025 10:19:44 +0000 (0:00:00.365) 0:00:16.310 *********
2025-06-19 10:19:52.315634 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:19:52.315653 | orchestrator |
2025-06-19 10:19:52.315671 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:19:52.315689 | orchestrator | Thursday 19 June 2025 10:19:44 +0000 (0:00:00.207) 0:00:16.518 *********
2025-06-19 10:19:52.315707 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:19:52.315725 | orchestrator |
2025-06-19 10:19:52.315743 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:19:52.315761 | orchestrator | Thursday 19 June 2025 10:19:44 +0000 (0:00:00.198) 0:00:16.717 *********
2025-06-19 10:19:52.315781 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:19:52.315801 | orchestrator |
2025-06-19 10:19:52.315822 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:19:52.315876 | orchestrator | Thursday 19 June 2025 10:19:45 +0000 (0:00:00.203) 0:00:16.920 *********
2025-06-19 10:19:52.315899 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:19:52.315920 | orchestrator |
2025-06-19 10:19:52.315942 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:19:52.316019 | orchestrator | Thursday 19 June 2025 10:19:45 +0000 (0:00:00.201) 0:00:17.122 *********
2025-06-19 10:19:52.316039 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:19:52.316058 | orchestrator |
2025-06-19 10:19:52.316076 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:19:52.316097 | orchestrator | Thursday 19 June 2025 10:19:45 +0000 (0:00:00.213) 0:00:17.336 *********
2025-06-19 10:19:52.316117 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:19:52.316136 | orchestrator |
2025-06-19 10:19:52.316150 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:19:52.316162 | orchestrator | Thursday 19 June 2025 10:19:46 +0000 (0:00:00.730) 0:00:18.066 *********
2025-06-19 10:19:52.316174 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:19:52.316191 | orchestrator |
2025-06-19 10:19:52.316210 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:19:52.316229 | orchestrator | Thursday 19 June 2025 10:19:46 +0000 (0:00:00.228) 0:00:18.295 *********
2025-06-19 10:19:52.316247 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:19:52.316266 | orchestrator |
2025-06-19 10:19:52.316284 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:19:52.316303 | orchestrator | Thursday 19 June 2025 10:19:46 +0000 (0:00:00.249) 0:00:18.544 *********
2025-06-19 10:19:52.316322 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_32c85e8d-b71e-43db-9ec2-d353b455abf6)
2025-06-19 10:19:52.316342 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_32c85e8d-b71e-43db-9ec2-d353b455abf6)
2025-06-19 10:19:52.316360 | orchestrator |
2025-06-19 10:19:52.316378 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:19:52.316398 | orchestrator | Thursday 19 June 2025 10:19:47 +0000 (0:00:00.414) 0:00:18.958 *********
2025-06-19 10:19:52.316417 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_6a40ab2f-d460-475a-85e2-5470cb1f2b74)
2025-06-19 10:19:52.316437 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_6a40ab2f-d460-475a-85e2-5470cb1f2b74)
2025-06-19 10:19:52.316448 | orchestrator |
2025-06-19 10:19:52.316459 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:19:52.316470 | orchestrator | Thursday 19 June 2025 10:19:47 +0000 (0:00:00.484) 0:00:19.443 *********
2025-06-19 10:19:52.316480 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_38f445f8-bcf4-4b54-8d34-faf3abd36175)
2025-06-19 10:19:52.316491 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_38f445f8-bcf4-4b54-8d34-faf3abd36175)
2025-06-19 10:19:52.316502 | orchestrator |
2025-06-19 10:19:52.316512 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:19:52.316523 | orchestrator | Thursday 19 June 2025 10:19:48 +0000 (0:00:00.445) 0:00:19.889 *********
2025-06-19 10:19:52.316555 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2f17817e-651e-4f9a-8129-c3db8254ad0b)
2025-06-19 10:19:52.316567 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2f17817e-651e-4f9a-8129-c3db8254ad0b)
2025-06-19 10:19:52.316583 | orchestrator |
2025-06-19 10:19:52.316614 | orchestrator | TASK [Add known links to
the list of available block devices] ****************** 2025-06-19 10:19:52.316634 | orchestrator | Thursday 19 June 2025 10:19:48 +0000 (0:00:00.406) 0:00:20.295 ********* 2025-06-19 10:19:52.316654 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-19 10:19:52.316673 | orchestrator | 2025-06-19 10:19:52.316686 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-19 10:19:52.316697 | orchestrator | Thursday 19 June 2025 10:19:48 +0000 (0:00:00.334) 0:00:20.629 ********* 2025-06-19 10:19:52.316720 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-06-19 10:19:52.316731 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-06-19 10:19:52.316742 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-06-19 10:19:52.316752 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-06-19 10:19:52.316763 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-06-19 10:19:52.316773 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-06-19 10:19:52.316783 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-06-19 10:19:52.316794 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-06-19 10:19:52.316805 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-06-19 10:19:52.316815 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-06-19 10:19:52.316826 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 
2025-06-19 10:19:52.316836 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-06-19 10:19:52.316847 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-06-19 10:19:52.316857 | orchestrator |
2025-06-19 10:19:52.316868 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:19:52.316878 | orchestrator | Thursday 19 June 2025 10:19:49 +0000 (0:00:00.382) 0:00:21.012 *********
2025-06-19 10:19:52.316889 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:19:52.316900 | orchestrator |
2025-06-19 10:19:52.316911 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:19:52.316921 | orchestrator | Thursday 19 June 2025 10:19:49 +0000 (0:00:00.223) 0:00:21.236 *********
2025-06-19 10:19:52.316932 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:19:52.316942 | orchestrator |
2025-06-19 10:19:52.316985 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:19:52.316997 | orchestrator | Thursday 19 June 2025 10:19:50 +0000 (0:00:00.715) 0:00:21.951 *********
2025-06-19 10:19:52.317009 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:19:52.317028 | orchestrator |
2025-06-19 10:19:52.317046 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:19:52.317065 | orchestrator | Thursday 19 June 2025 10:19:50 +0000 (0:00:00.216) 0:00:22.168 *********
2025-06-19 10:19:52.317085 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:19:52.317103 | orchestrator |
2025-06-19 10:19:52.317115 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:19:52.317126 | orchestrator | Thursday 19 June 2025 10:19:50 +0000 (0:00:00.233) 0:00:22.401 *********
2025-06-19 10:19:52.317136 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:19:52.317147 | orchestrator |
2025-06-19 10:19:52.317157 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:19:52.317168 | orchestrator | Thursday 19 June 2025 10:19:50 +0000 (0:00:00.272) 0:00:22.673 *********
2025-06-19 10:19:52.317178 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:19:52.317189 | orchestrator |
2025-06-19 10:19:52.317199 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:19:52.317209 | orchestrator | Thursday 19 June 2025 10:19:51 +0000 (0:00:00.210) 0:00:22.884 *********
2025-06-19 10:19:52.317220 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:19:52.317230 | orchestrator |
2025-06-19 10:19:52.317241 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:19:52.317251 | orchestrator | Thursday 19 June 2025 10:19:51 +0000 (0:00:00.204) 0:00:23.088 *********
2025-06-19 10:19:52.317270 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:19:52.317281 | orchestrator |
2025-06-19 10:19:52.317292 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:19:52.317302 | orchestrator | Thursday 19 June 2025 10:19:51 +0000 (0:00:00.197) 0:00:23.286 *********
2025-06-19 10:19:52.317313 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-06-19 10:19:52.317324 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-06-19 10:19:52.317334 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-06-19 10:19:52.317345 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-06-19 10:19:52.317356 | orchestrator |
2025-06-19 10:19:52.317366 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:19:52.317377 | orchestrator | Thursday 19 June 2025 10:19:52 +0000 (0:00:00.661) 0:00:23.948 *********
2025-06-19 10:19:52.317387 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:19:52.317398 | orchestrator |
2025-06-19 10:19:52.317427 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:19:58.874125 | orchestrator | Thursday 19 June 2025 10:19:52 +0000 (0:00:00.223) 0:00:24.172 *********
2025-06-19 10:19:58.874237 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:19:58.874254 | orchestrator |
2025-06-19 10:19:58.874267 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:19:58.874279 | orchestrator | Thursday 19 June 2025 10:19:52 +0000 (0:00:00.181) 0:00:24.353 *********
2025-06-19 10:19:58.874290 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:19:58.874301 | orchestrator |
2025-06-19 10:19:58.874313 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:19:58.874324 | orchestrator | Thursday 19 June 2025 10:19:52 +0000 (0:00:00.200) 0:00:24.554 *********
2025-06-19 10:19:58.874335 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:19:58.874347 | orchestrator |
2025-06-19 10:19:58.874379 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-06-19 10:19:58.874397 | orchestrator | Thursday 19 June 2025 10:19:52 +0000 (0:00:00.206) 0:00:24.761 *********
2025-06-19 10:19:58.874408 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2025-06-19 10:19:58.874419 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2025-06-19 10:19:58.874431 | orchestrator |
2025-06-19 10:19:58.874442 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-06-19 10:19:58.874453 | orchestrator | Thursday 19 June 2025 10:19:53 +0000 (0:00:00.350) 0:00:25.112 *********
2025-06-19 10:19:58.874463 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:19:58.874475 | orchestrator |
2025-06-19 10:19:58.874486 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-06-19 10:19:58.874497 | orchestrator | Thursday 19 June 2025 10:19:53 +0000 (0:00:00.137) 0:00:25.249 *********
2025-06-19 10:19:58.874508 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:19:58.874519 | orchestrator |
2025-06-19 10:19:58.874530 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-06-19 10:19:58.874541 | orchestrator | Thursday 19 June 2025 10:19:53 +0000 (0:00:00.139) 0:00:25.389 *********
2025-06-19 10:19:58.874554 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:19:58.874566 | orchestrator |
2025-06-19 10:19:58.874578 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-06-19 10:19:58.874590 | orchestrator | Thursday 19 June 2025 10:19:53 +0000 (0:00:00.142) 0:00:25.531 *********
2025-06-19 10:19:58.874602 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:19:58.874615 | orchestrator |
2025-06-19 10:19:58.874627 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-06-19 10:19:58.874639 | orchestrator | Thursday 19 June 2025 10:19:53 +0000 (0:00:00.134) 0:00:25.666 *********
2025-06-19 10:19:58.874652 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6ed986be-d550-5e98-86ee-1d899c3b1ca9'}})
2025-06-19 10:19:58.874666 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '79abc216-b4ba-5883-a19f-da26bd64d731'}})
2025-06-19 10:19:58.874712 | orchestrator |
2025-06-19 10:19:58.874734 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-06-19 10:19:58.874755 | orchestrator | Thursday 19 June 2025 10:19:54 +0000 (0:00:00.210) 0:00:25.876 *********
2025-06-19 10:19:58.874775 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6ed986be-d550-5e98-86ee-1d899c3b1ca9'}})
2025-06-19 10:19:58.874796 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '79abc216-b4ba-5883-a19f-da26bd64d731'}})
2025-06-19 10:19:58.874815 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:19:58.874834 | orchestrator |
2025-06-19 10:19:58.874853 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-06-19 10:19:58.874872 | orchestrator | Thursday 19 June 2025 10:19:54 +0000 (0:00:00.164) 0:00:26.041 *********
2025-06-19 10:19:58.874894 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6ed986be-d550-5e98-86ee-1d899c3b1ca9'}})
2025-06-19 10:19:58.874908 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '79abc216-b4ba-5883-a19f-da26bd64d731'}})
2025-06-19 10:19:58.874919 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:19:58.874929 | orchestrator |
2025-06-19 10:19:58.874940 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-06-19 10:19:58.874951 | orchestrator | Thursday 19 June 2025 10:19:54 +0000 (0:00:00.184) 0:00:26.226 *********
2025-06-19 10:19:58.874987 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6ed986be-d550-5e98-86ee-1d899c3b1ca9'}})
2025-06-19 10:19:58.874999 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '79abc216-b4ba-5883-a19f-da26bd64d731'}})
2025-06-19 10:19:58.875010 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:19:58.875021 | orchestrator |
2025-06-19 10:19:58.875032 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-06-19 10:19:58.875043 | orchestrator | Thursday 19 June 2025 10:19:54 +0000 (0:00:00.152) 0:00:26.378 *********
2025-06-19 10:19:58.875053 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:19:58.875064 | orchestrator |
2025-06-19 10:19:58.875079 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-06-19 10:19:58.875098 | orchestrator | Thursday 19 June 2025 10:19:54 +0000 (0:00:00.134) 0:00:26.512 *********
2025-06-19 10:19:58.875114 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:19:58.875125 | orchestrator |
2025-06-19 10:19:58.875135 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-06-19 10:19:58.875146 | orchestrator | Thursday 19 June 2025 10:19:54 +0000 (0:00:00.142) 0:00:26.655 *********
2025-06-19 10:19:58.875157 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:19:58.875168 | orchestrator |
2025-06-19 10:19:58.875198 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-06-19 10:19:58.875210 | orchestrator | Thursday 19 June 2025 10:19:54 +0000 (0:00:00.121) 0:00:26.777 *********
2025-06-19 10:19:58.875220 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:19:58.875231 | orchestrator |
2025-06-19 10:19:58.875242 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-06-19 10:19:58.875253 | orchestrator | Thursday 19 June 2025 10:19:55 +0000 (0:00:00.337) 0:00:27.115 *********
2025-06-19 10:19:58.875263 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:19:58.875274 | orchestrator |
2025-06-19 10:19:58.875285 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-06-19 10:19:58.875296 | orchestrator | Thursday 19 June 2025 10:19:55 +0000 (0:00:00.157) 0:00:27.272 *********
2025-06-19 10:19:58.875307 | orchestrator | ok: [testbed-node-4] => {
2025-06-19 10:19:58.875317 | orchestrator |     "ceph_osd_devices": {
2025-06-19 10:19:58.875328 | orchestrator |         "sdb": {
2025-06-19 10:19:58.875339 | orchestrator |             "osd_lvm_uuid": "6ed986be-d550-5e98-86ee-1d899c3b1ca9"
2025-06-19 10:19:58.875363 | orchestrator |         },
2025-06-19 10:19:58.875374 | orchestrator |         "sdc": {
2025-06-19 10:19:58.875385 | orchestrator |             "osd_lvm_uuid": "79abc216-b4ba-5883-a19f-da26bd64d731"
2025-06-19 10:19:58.875396 | orchestrator |         }
2025-06-19 10:19:58.875407 | orchestrator |     }
2025-06-19 10:19:58.875418 | orchestrator | }
2025-06-19 10:19:58.875429 | orchestrator |
2025-06-19 10:19:58.875440 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-06-19 10:19:58.875451 | orchestrator | Thursday 19 June 2025 10:19:55 +0000 (0:00:00.148) 0:00:27.420 *********
2025-06-19 10:19:58.875462 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:19:58.875473 | orchestrator |
2025-06-19 10:19:58.875483 | orchestrator | TASK [Print DB devices] ********************************************************
2025-06-19 10:19:58.875494 | orchestrator | Thursday 19 June 2025 10:19:55 +0000 (0:00:00.142) 0:00:27.563 *********
2025-06-19 10:19:58.875505 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:19:58.875516 | orchestrator |
2025-06-19 10:19:58.875526 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-06-19 10:19:58.875537 | orchestrator | Thursday 19 June 2025 10:19:55 +0000 (0:00:00.131) 0:00:27.694 *********
2025-06-19 10:19:58.875555 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:19:58.875566 | orchestrator |
2025-06-19 10:19:58.875577 | orchestrator | TASK [Print configuration data] ************************************************
2025-06-19 10:19:58.875588 | orchestrator | Thursday 19 June 2025 10:19:55 +0000 (0:00:00.131) 0:00:27.826 *********
2025-06-19 10:19:58.875598 | orchestrator | changed: [testbed-node-4] => {
2025-06-19 10:19:58.875609 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-06-19 10:19:58.875620 | orchestrator |         "ceph_osd_devices": {
2025-06-19 10:19:58.875630 | orchestrator |             "sdb": {
2025-06-19 10:19:58.875641 | orchestrator |                 "osd_lvm_uuid": "6ed986be-d550-5e98-86ee-1d899c3b1ca9"
2025-06-19 10:19:58.875652 | orchestrator |             },
2025-06-19 10:19:58.875663 | orchestrator |             "sdc": {
2025-06-19 10:19:58.875673 | orchestrator |                 "osd_lvm_uuid": "79abc216-b4ba-5883-a19f-da26bd64d731"
2025-06-19 10:19:58.875684 | orchestrator |             }
2025-06-19 10:19:58.875695 | orchestrator |         },
2025-06-19 10:19:58.875706 | orchestrator |         "lvm_volumes": [
2025-06-19 10:19:58.875722 | orchestrator |             {
2025-06-19 10:19:58.875741 | orchestrator |                 "data": "osd-block-6ed986be-d550-5e98-86ee-1d899c3b1ca9",
2025-06-19 10:19:58.875759 | orchestrator |                 "data_vg": "ceph-6ed986be-d550-5e98-86ee-1d899c3b1ca9"
2025-06-19 10:19:58.875779 | orchestrator |             },
2025-06-19 10:19:58.875799 | orchestrator |             {
2025-06-19 10:19:58.875819 | orchestrator |                 "data": "osd-block-79abc216-b4ba-5883-a19f-da26bd64d731",
2025-06-19 10:19:58.875838 | orchestrator |                 "data_vg": "ceph-79abc216-b4ba-5883-a19f-da26bd64d731"
2025-06-19 10:19:58.875857 | orchestrator |             }
2025-06-19 10:19:58.875875 | orchestrator |         ]
2025-06-19 10:19:58.875892 | orchestrator |     }
2025-06-19 10:19:58.875910 | orchestrator | }
2025-06-19 10:19:58.875928 | orchestrator |
2025-06-19 10:19:58.875944 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-06-19 10:19:58.875987 | orchestrator | Thursday 19 June 2025 10:19:56 +0000 (0:00:00.215) 0:00:28.041 *********
2025-06-19 10:19:58.876006 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-06-19 10:19:58.876026 | orchestrator |
2025-06-19 10:19:58.876045 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-06-19 10:19:58.876063 | orchestrator |
2025-06-19 10:19:58.876083 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-19 10:19:58.876097 | orchestrator | Thursday 19 June 2025 10:19:57 +0000 (0:00:01.111) 0:00:29.153 *********
2025-06-19 10:19:58.876108 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-06-19 10:19:58.876118 | orchestrator |
2025-06-19 10:19:58.876129 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-06-19 10:19:58.876149 | orchestrator | Thursday 19 June 2025 10:19:57 +0000 (0:00:00.499) 0:00:29.652 *********
2025-06-19 10:19:58.876161 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:19:58.876172 | orchestrator |
2025-06-19 10:19:58.876183 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:19:58.876193 | orchestrator | Thursday 19 June 2025 10:19:58 +0000 (0:00:00.696) 0:00:30.349 *********
2025-06-19 10:19:58.876204 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-06-19 10:19:58.876215 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-06-19 10:19:58.876226 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-06-19 10:19:58.876237 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-06-19 10:19:58.876247 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-06-19 10:19:58.876258 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-06-19 10:19:58.876279 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-06-19 10:20:07.255352 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-06-19 10:20:07.255428 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-06-19 10:20:07.255443 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-06-19 10:20:07.255454 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-06-19 10:20:07.255465 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-06-19 10:20:07.255475 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-06-19 10:20:07.255487 | orchestrator |
2025-06-19 10:20:07.255498 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:20:07.255510 | orchestrator | Thursday 19 June 2025 10:19:58 +0000 (0:00:00.377) 0:00:30.727 *********
2025-06-19 10:20:07.255521 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:20:07.255532 | orchestrator |
2025-06-19 10:20:07.255543 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:20:07.255553 | orchestrator | Thursday 19 June 2025 10:19:59 +0000 (0:00:00.238) 0:00:30.966 *********
2025-06-19 10:20:07.255564 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:20:07.255575 | orchestrator |
2025-06-19 10:20:07.255585 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:20:07.255596 | orchestrator | Thursday 19 June 2025 10:19:59 +0000 (0:00:00.202) 0:00:31.168 *********
2025-06-19 10:20:07.255606 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:20:07.255617 | orchestrator |
2025-06-19 10:20:07.255627 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:20:07.255638 | orchestrator | Thursday 19 June 2025 10:19:59 +0000 (0:00:00.213) 0:00:31.382 *********
2025-06-19 10:20:07.255649 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:20:07.255660 | orchestrator |
2025-06-19 10:20:07.255671 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:20:07.255682 | orchestrator | Thursday 19 June 2025 10:19:59 +0000 (0:00:00.226) 0:00:31.609 *********
2025-06-19 10:20:07.255692 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:20:07.255703 | orchestrator |
2025-06-19 10:20:07.255713 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:20:07.255724 | orchestrator | Thursday 19 June 2025 10:19:59 +0000 (0:00:00.204) 0:00:31.813 *********
2025-06-19 10:20:07.255734 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:20:07.255745 | orchestrator |
2025-06-19 10:20:07.255755 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:20:07.255785 | orchestrator | Thursday 19 June 2025 10:20:00 +0000 (0:00:00.190) 0:00:32.004 *********
2025-06-19 10:20:07.255797 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:20:07.255808 | orchestrator |
2025-06-19 10:20:07.255827 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:20:07.255846 | orchestrator | Thursday 19 June 2025 10:20:00 +0000 (0:00:00.206) 0:00:32.210 *********
2025-06-19 10:20:07.255866 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:20:07.255877 | orchestrator |
2025-06-19 10:20:07.255888 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:20:07.255898 | orchestrator | Thursday 19 June 2025 10:20:00 +0000 (0:00:00.204) 0:00:32.415 *********
2025-06-19 10:20:07.255969 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d3db73c4-91fc-4185-92a8-f3f49747b38e)
2025-06-19 10:20:07.256021 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d3db73c4-91fc-4185-92a8-f3f49747b38e)
2025-06-19 10:20:07.256034 | orchestrator |
2025-06-19 10:20:07.256046 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:20:07.256059 | orchestrator | Thursday 19 June 2025 10:20:01 +0000 (0:00:00.633) 0:00:33.049 *********
2025-06-19 10:20:07.256071 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1ab95973-8f65-40ad-b4e2-5ebf4e7cdc3f)
2025-06-19 10:20:07.256102 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1ab95973-8f65-40ad-b4e2-5ebf4e7cdc3f)
2025-06-19 10:20:07.256126 | orchestrator |
2025-06-19 10:20:07.256147 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:20:07.256166 | orchestrator | Thursday 19 June 2025 10:20:02 +0000 (0:00:00.823) 0:00:33.873 *********
2025-06-19 10:20:07.256178 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d7da1435-c5c9-4327-bd6f-1fcfb647c27d)
2025-06-19 10:20:07.256197 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d7da1435-c5c9-4327-bd6f-1fcfb647c27d)
2025-06-19 10:20:07.256210 | orchestrator |
2025-06-19 10:20:07.256223 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:20:07.256270 | orchestrator | Thursday 19 June 2025 10:20:02 +0000 (0:00:00.415) 0:00:34.289 *********
2025-06-19 10:20:07.256283 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_48d47195-a07b-47d0-b7e6-8f07488663d6)
2025-06-19 10:20:07.256294 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_48d47195-a07b-47d0-b7e6-8f07488663d6)
2025-06-19 10:20:07.256305 | orchestrator |
2025-06-19 10:20:07.256316 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:20:07.256327 | orchestrator | Thursday 19 June 2025 10:20:02 +0000 (0:00:00.415) 0:00:34.704 *********
2025-06-19 10:20:07.256337 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-06-19 10:20:07.256348 | orchestrator |
2025-06-19 10:20:07.256359 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:20:07.256369 | orchestrator | Thursday 19 June 2025 10:20:03 +0000 (0:00:00.333) 0:00:35.038 *********
2025-06-19 10:20:07.256396 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-06-19 10:20:07.256408 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-06-19 10:20:07.256419 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-06-19 10:20:07.256429 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-06-19 10:20:07.256469 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-06-19 10:20:07.256481 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-06-19 10:20:07.256492 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-06-19 10:20:07.256502 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-06-19 10:20:07.256523 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-06-19 10:20:07.256533 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-06-19 10:20:07.256544 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-06-19 10:20:07.256555 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-06-19 10:20:07.256565 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-06-19 10:20:07.256576 | orchestrator |
2025-06-19 10:20:07.256586 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:20:07.256627 | orchestrator | Thursday 19 June 2025 10:20:03 +0000 (0:00:00.415) 0:00:35.454 *********
2025-06-19 10:20:07.256639 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:20:07.256649 | orchestrator |
2025-06-19 10:20:07.256660 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:20:07.256671 | orchestrator | Thursday 19 June 2025 10:20:03 +0000 (0:00:00.210) 0:00:35.665 *********
2025-06-19 10:20:07.256681 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:20:07.256692 | orchestrator |
2025-06-19 10:20:07.256703 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:20:07.256714 | orchestrator | Thursday 19 June 2025 10:20:03 +0000 (0:00:00.203) 0:00:35.868 *********
2025-06-19 10:20:07.256724 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:20:07.256735 | orchestrator |
2025-06-19 10:20:07.256745 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:20:07.256756 | orchestrator | Thursday 19 June 2025 10:20:04 +0000 (0:00:00.213) 0:00:36.081 *********
2025-06-19 10:20:07.256767 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:20:07.256777 | orchestrator |
2025-06-19 10:20:07.256788 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:20:07.256799 | orchestrator | Thursday 19 June 2025 10:20:04 +0000 (0:00:00.204) 0:00:36.286 *********
2025-06-19 10:20:07.256809 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:20:07.256820 | orchestrator |
2025-06-19 10:20:07.256830 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:20:07.256841 | orchestrator | Thursday 19 June 2025 10:20:04 +0000 (0:00:00.226) 0:00:36.512 *********
2025-06-19 10:20:07.256852 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:20:07.256862 | orchestrator |
2025-06-19 10:20:07.256873 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:20:07.256883 | orchestrator | Thursday 19 June 2025 10:20:05 +0000 (0:00:00.676) 0:00:37.188 *********
2025-06-19 10:20:07.256894 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:20:07.256905 | orchestrator |
2025-06-19 10:20:07.256915 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:20:07.256926 | orchestrator | Thursday 19 June 2025 10:20:05 +0000 (0:00:00.221) 0:00:37.410 *********
2025-06-19 10:20:07.256937 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:20:07.256947 | orchestrator |
2025-06-19 10:20:07.256958 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:20:07.256969 | orchestrator | Thursday 19 June 2025 10:20:05 +0000 (0:00:00.192) 0:00:37.603 *********
2025-06-19 10:20:07.257007 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-06-19 10:20:07.257028 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-06-19 10:20:07.257047 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-06-19 10:20:07.257060 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2025-06-19 10:20:07.257070 | orchestrator |
2025-06-19 10:20:07.257081 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:20:07.257092 | orchestrator | Thursday 19 June 2025 10:20:06 +0000 (0:00:00.671) 0:00:38.274 *********
2025-06-19 10:20:07.257102 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:20:07.257121 | orchestrator |
2025-06-19 10:20:07.257132 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:20:07.257147 | orchestrator | Thursday 19 June 2025 10:20:06 +0000 (0:00:00.207) 0:00:38.482 *********
2025-06-19 10:20:07.257164 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:20:07.257181 | orchestrator |
2025-06-19 10:20:07.257200 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:20:07.257218 | orchestrator | Thursday 19 June 2025 10:20:06 +0000 (0:00:00.210) 0:00:38.692 *********
2025-06-19 10:20:07.257238 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:20:07.257256 | orchestrator |
2025-06-19 10:20:07.257269 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:20:07.257279 | orchestrator | Thursday 19 June 2025 10:20:07 +0000 (0:00:00.205) 0:00:38.898 *********
2025-06-19 10:20:07.257290 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:20:07.257300 | orchestrator |
2025-06-19 10:20:07.257311 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-06-19 10:20:07.257330 | orchestrator | Thursday 19 June 2025 10:20:07 +0000 (0:00:00.209) 0:00:39.107 *********
2025-06-19 10:20:11.539383 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2025-06-19 10:20:11.539489 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2025-06-19 10:20:11.539505 | orchestrator |
2025-06-19 10:20:11.539518 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-06-19 10:20:11.539529 | orchestrator | Thursday 19 June 2025 10:20:07 +0000 (0:00:00.177) 0:00:39.285 *********
2025-06-19 10:20:11.539541 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:20:11.539560 | orchestrator |
2025-06-19 10:20:11.539572 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-06-19 10:20:11.539583 | orchestrator | Thursday 19 June 2025 10:20:07 +0000 (0:00:00.138) 0:00:39.423 *********
2025-06-19 10:20:11.539594 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:20:11.539605 | orchestrator | 2025-06-19 10:20:11.539616 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-06-19 10:20:11.539627 | orchestrator | Thursday 19 June 2025 10:20:07 +0000 (0:00:00.131) 0:00:39.555 ********* 2025-06-19 10:20:11.539638 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:20:11.539649 | orchestrator | 2025-06-19 10:20:11.539660 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-06-19 10:20:11.539671 | orchestrator | Thursday 19 June 2025 10:20:07 +0000 (0:00:00.134) 0:00:39.689 ********* 2025-06-19 10:20:11.539696 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:20:11.539708 | orchestrator | 2025-06-19 10:20:11.539719 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-06-19 10:20:11.539730 | orchestrator | Thursday 19 June 2025 10:20:08 +0000 (0:00:00.359) 0:00:40.049 ********* 2025-06-19 10:20:11.539742 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3c3fffd7-e076-56d5-815a-37625d7b3693'}}) 2025-06-19 10:20:11.539753 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'eebf63d4-54bc-5b4a-b141-3683d252bf06'}}) 2025-06-19 10:20:11.539764 | orchestrator | 2025-06-19 10:20:11.539775 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-06-19 10:20:11.539785 | orchestrator | Thursday 19 June 2025 10:20:08 +0000 (0:00:00.175) 0:00:40.224 ********* 2025-06-19 10:20:11.539797 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3c3fffd7-e076-56d5-815a-37625d7b3693'}})  2025-06-19 10:20:11.539809 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'eebf63d4-54bc-5b4a-b141-3683d252bf06'}})  
2025-06-19 10:20:11.539820 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:20:11.539830 | orchestrator | 2025-06-19 10:20:11.539841 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-06-19 10:20:11.539852 | orchestrator | Thursday 19 June 2025 10:20:08 +0000 (0:00:00.151) 0:00:40.376 ********* 2025-06-19 10:20:11.539881 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3c3fffd7-e076-56d5-815a-37625d7b3693'}})  2025-06-19 10:20:11.539893 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'eebf63d4-54bc-5b4a-b141-3683d252bf06'}})  2025-06-19 10:20:11.539903 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:20:11.539914 | orchestrator | 2025-06-19 10:20:11.539925 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-06-19 10:20:11.539936 | orchestrator | Thursday 19 June 2025 10:20:08 +0000 (0:00:00.166) 0:00:40.542 ********* 2025-06-19 10:20:11.539949 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3c3fffd7-e076-56d5-815a-37625d7b3693'}})  2025-06-19 10:20:11.539961 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'eebf63d4-54bc-5b4a-b141-3683d252bf06'}})  2025-06-19 10:20:11.539973 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:20:11.540016 | orchestrator | 2025-06-19 10:20:11.540029 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-06-19 10:20:11.540041 | orchestrator | Thursday 19 June 2025 10:20:08 +0000 (0:00:00.170) 0:00:40.713 ********* 2025-06-19 10:20:11.540053 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:20:11.540065 | orchestrator | 2025-06-19 10:20:11.540077 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-06-19 10:20:11.540089 | 
orchestrator | Thursday 19 June 2025 10:20:08 +0000 (0:00:00.131) 0:00:40.844 ********* 2025-06-19 10:20:11.540102 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:20:11.540114 | orchestrator | 2025-06-19 10:20:11.540132 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-06-19 10:20:11.540144 | orchestrator | Thursday 19 June 2025 10:20:09 +0000 (0:00:00.163) 0:00:41.008 ********* 2025-06-19 10:20:11.540156 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:20:11.540169 | orchestrator | 2025-06-19 10:20:11.540180 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-06-19 10:20:11.540193 | orchestrator | Thursday 19 June 2025 10:20:09 +0000 (0:00:00.145) 0:00:41.153 ********* 2025-06-19 10:20:11.540205 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:20:11.540216 | orchestrator | 2025-06-19 10:20:11.540232 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-06-19 10:20:11.540244 | orchestrator | Thursday 19 June 2025 10:20:09 +0000 (0:00:00.137) 0:00:41.291 ********* 2025-06-19 10:20:11.540256 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:20:11.540269 | orchestrator | 2025-06-19 10:20:11.540281 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-06-19 10:20:11.540293 | orchestrator | Thursday 19 June 2025 10:20:09 +0000 (0:00:00.140) 0:00:41.432 ********* 2025-06-19 10:20:11.540304 | orchestrator | ok: [testbed-node-5] => { 2025-06-19 10:20:11.540315 | orchestrator |  "ceph_osd_devices": { 2025-06-19 10:20:11.540325 | orchestrator |  "sdb": { 2025-06-19 10:20:11.540336 | orchestrator |  "osd_lvm_uuid": "3c3fffd7-e076-56d5-815a-37625d7b3693" 2025-06-19 10:20:11.540363 | orchestrator |  }, 2025-06-19 10:20:11.540374 | orchestrator |  "sdc": { 2025-06-19 10:20:11.540385 | orchestrator |  "osd_lvm_uuid": 
"eebf63d4-54bc-5b4a-b141-3683d252bf06" 2025-06-19 10:20:11.540396 | orchestrator |  } 2025-06-19 10:20:11.540406 | orchestrator |  } 2025-06-19 10:20:11.540417 | orchestrator | } 2025-06-19 10:20:11.540428 | orchestrator | 2025-06-19 10:20:11.540439 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-06-19 10:20:11.540450 | orchestrator | Thursday 19 June 2025 10:20:09 +0000 (0:00:00.150) 0:00:41.582 ********* 2025-06-19 10:20:11.540461 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:20:11.540472 | orchestrator | 2025-06-19 10:20:11.540482 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-06-19 10:20:11.540493 | orchestrator | Thursday 19 June 2025 10:20:09 +0000 (0:00:00.151) 0:00:41.734 ********* 2025-06-19 10:20:11.540511 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:20:11.540522 | orchestrator | 2025-06-19 10:20:11.540533 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-06-19 10:20:11.540544 | orchestrator | Thursday 19 June 2025 10:20:10 +0000 (0:00:00.336) 0:00:42.071 ********* 2025-06-19 10:20:11.540554 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:20:11.540565 | orchestrator | 2025-06-19 10:20:11.540576 | orchestrator | TASK [Print configuration data] ************************************************ 2025-06-19 10:20:11.540587 | orchestrator | Thursday 19 June 2025 10:20:10 +0000 (0:00:00.136) 0:00:42.208 ********* 2025-06-19 10:20:11.540597 | orchestrator | changed: [testbed-node-5] => { 2025-06-19 10:20:11.540608 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-06-19 10:20:11.540619 | orchestrator |  "ceph_osd_devices": { 2025-06-19 10:20:11.540630 | orchestrator |  "sdb": { 2025-06-19 10:20:11.540641 | orchestrator |  "osd_lvm_uuid": "3c3fffd7-e076-56d5-815a-37625d7b3693" 2025-06-19 10:20:11.540652 | orchestrator |  }, 2025-06-19 10:20:11.540663 | 
orchestrator |  "sdc": { 2025-06-19 10:20:11.540673 | orchestrator |  "osd_lvm_uuid": "eebf63d4-54bc-5b4a-b141-3683d252bf06" 2025-06-19 10:20:11.540684 | orchestrator |  } 2025-06-19 10:20:11.540695 | orchestrator |  }, 2025-06-19 10:20:11.540706 | orchestrator |  "lvm_volumes": [ 2025-06-19 10:20:11.540717 | orchestrator |  { 2025-06-19 10:20:11.540728 | orchestrator |  "data": "osd-block-3c3fffd7-e076-56d5-815a-37625d7b3693", 2025-06-19 10:20:11.540738 | orchestrator |  "data_vg": "ceph-3c3fffd7-e076-56d5-815a-37625d7b3693" 2025-06-19 10:20:11.540749 | orchestrator |  }, 2025-06-19 10:20:11.540760 | orchestrator |  { 2025-06-19 10:20:11.540771 | orchestrator |  "data": "osd-block-eebf63d4-54bc-5b4a-b141-3683d252bf06", 2025-06-19 10:20:11.540781 | orchestrator |  "data_vg": "ceph-eebf63d4-54bc-5b4a-b141-3683d252bf06" 2025-06-19 10:20:11.540792 | orchestrator |  } 2025-06-19 10:20:11.540803 | orchestrator |  ] 2025-06-19 10:20:11.540814 | orchestrator |  } 2025-06-19 10:20:11.540824 | orchestrator | } 2025-06-19 10:20:11.540835 | orchestrator | 2025-06-19 10:20:11.540846 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-06-19 10:20:11.540857 | orchestrator | Thursday 19 June 2025 10:20:10 +0000 (0:00:00.199) 0:00:42.407 ********* 2025-06-19 10:20:11.540867 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-06-19 10:20:11.540878 | orchestrator | 2025-06-19 10:20:11.540889 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-19 10:20:11.540900 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-06-19 10:20:11.540911 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-06-19 10:20:11.540922 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-06-19 
10:20:11.540933 | orchestrator | 2025-06-19 10:20:11.540944 | orchestrator | 2025-06-19 10:20:11.540955 | orchestrator | 2025-06-19 10:20:11.540965 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-19 10:20:11.540976 | orchestrator | Thursday 19 June 2025 10:20:11 +0000 (0:00:00.976) 0:00:43.383 ********* 2025-06-19 10:20:11.541003 | orchestrator | =============================================================================== 2025-06-19 10:20:11.541014 | orchestrator | Write configuration file ------------------------------------------------ 4.28s 2025-06-19 10:20:11.541025 | orchestrator | Get initial list of available block devices ----------------------------- 1.21s 2025-06-19 10:20:11.541035 | orchestrator | Add known partitions to the list of available block devices ------------- 1.19s 2025-06-19 10:20:11.541046 | orchestrator | Add known links to the list of available block devices ------------------ 1.11s 2025-06-19 10:20:11.541064 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.99s 2025-06-19 10:20:11.541075 | orchestrator | Add known partitions to the list of available block devices ------------- 0.88s 2025-06-19 10:20:11.541085 | orchestrator | Add known links to the list of available block devices ------------------ 0.82s 2025-06-19 10:20:11.541096 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.79s 2025-06-19 10:20:11.541106 | orchestrator | Add known links to the list of available block devices ------------------ 0.78s 2025-06-19 10:20:11.541117 | orchestrator | Add known links to the list of available block devices ------------------ 0.73s 2025-06-19 10:20:11.541127 | orchestrator | Add known partitions to the list of available block devices ------------- 0.72s 2025-06-19 10:20:11.541143 | orchestrator | Generate lvm_volumes structure (block + db + wal) ----------------------- 0.68s 2025-06-19 
10:20:11.541154 | orchestrator | Add known partitions to the list of available block devices ------------- 0.68s 2025-06-19 10:20:11.541165 | orchestrator | Add known partitions to the list of available block devices ------------- 0.67s 2025-06-19 10:20:11.541182 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.66s 2025-06-19 10:20:11.846292 | orchestrator | Add known partitions to the list of available block devices ------------- 0.66s 2025-06-19 10:20:11.846388 | orchestrator | Set WAL devices config data --------------------------------------------- 0.65s 2025-06-19 10:20:11.846401 | orchestrator | Add known links to the list of available block devices ------------------ 0.64s 2025-06-19 10:20:11.846413 | orchestrator | Add known links to the list of available block devices ------------------ 0.63s 2025-06-19 10:20:11.846424 | orchestrator | Add known links to the list of available block devices ------------------ 0.63s 2025-06-19 10:20:24.012664 | orchestrator | Registering Redlock._acquired_script 2025-06-19 10:20:24.012779 | orchestrator | Registering Redlock._extend_script 2025-06-19 10:20:24.012795 | orchestrator | Registering Redlock._release_script 2025-06-19 10:20:24.085822 | orchestrator | 2025-06-19 10:20:24 | INFO  | Task 12b291af-852b-4707-8fb8-4fc1c715ff7c (sync inventory) is running in background. Output coming soon. 
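The "Print configuration data" output above shows how each entry in `ceph_osd_devices` is expanded into an `lvm_volumes` item: the LV is named `osd-block-<osd_lvm_uuid>` and its VG `ceph-<osd_lvm_uuid>`. A minimal sketch of that mapping, reconstructed from the log output (the helper name `build_lvm_volumes` is illustrative, not from the playbook):

```python
def build_lvm_volumes(ceph_osd_devices):
    """Mirror the naming convention visible in the log:
    data    -> osd-block-<osd_lvm_uuid>
    data_vg -> ceph-<osd_lvm_uuid>
    """
    return [
        {
            "data": f"osd-block-{cfg['osd_lvm_uuid']}",
            "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",
        }
        for device, cfg in sorted(ceph_osd_devices.items())
    ]

# The devices reported for testbed-node-5 above:
devices = {
    "sdb": {"osd_lvm_uuid": "3c3fffd7-e076-56d5-815a-37625d7b3693"},
    "sdc": {"osd_lvm_uuid": "eebf63d4-54bc-5b4a-b141-3683d252bf06"},
}
print(build_lvm_volumes(devices))
```

Run against the two devices from the log, this reproduces exactly the `lvm_volumes` list written by the "Write configuration file" handler.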
2025-06-19 10:20:42.529182 | orchestrator | 2025-06-19 10:20:25 | INFO  | Starting group_vars file reorganization 2025-06-19 10:20:42.529297 | orchestrator | 2025-06-19 10:20:25 | INFO  | Moved 0 file(s) to their respective directories 2025-06-19 10:20:42.529313 | orchestrator | 2025-06-19 10:20:25 | INFO  | Group_vars file reorganization completed 2025-06-19 10:20:42.529324 | orchestrator | 2025-06-19 10:20:27 | INFO  | Starting variable preparation from inventory 2025-06-19 10:20:42.529335 | orchestrator | 2025-06-19 10:20:28 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2025-06-19 10:20:42.529345 | orchestrator | 2025-06-19 10:20:28 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2025-06-19 10:20:42.529355 | orchestrator | 2025-06-19 10:20:28 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2025-06-19 10:20:42.529365 | orchestrator | 2025-06-19 10:20:28 | INFO  | 3 file(s) written, 6 host(s) processed 2025-06-19 10:20:42.529374 | orchestrator | 2025-06-19 10:20:28 | INFO  | Variable preparation completed 2025-06-19 10:20:42.529384 | orchestrator | 2025-06-19 10:20:29 | INFO  | Starting inventory overwrite handling 2025-06-19 10:20:42.529394 | orchestrator | 2025-06-19 10:20:29 | INFO  | Handling group overwrites in 99-overwrite 2025-06-19 10:20:42.529403 | orchestrator | 2025-06-19 10:20:29 | INFO  | Removing group frr:children from 60-generic 2025-06-19 10:20:42.529413 | orchestrator | 2025-06-19 10:20:29 | INFO  | Removing group storage:children from 50-kolla 2025-06-19 10:20:42.529423 | orchestrator | 2025-06-19 10:20:29 | INFO  | Removing group netbird:children from 50-infrastruture 2025-06-19 10:20:42.529432 | orchestrator | 2025-06-19 10:20:29 | INFO  | Removing group ceph-rgw from 50-ceph 2025-06-19 10:20:42.529467 | orchestrator | 2025-06-19 10:20:29 | INFO  | Removing group ceph-mds from 50-ceph 2025-06-19 10:20:42.529477 | orchestrator | 2025-06-19 10:20:29 | INFO  | Handling group 
overwrites in 20-roles 2025-06-19 10:20:42.529487 | orchestrator | 2025-06-19 10:20:29 | INFO  | Removing group k3s_node from 50-infrastruture 2025-06-19 10:20:42.529496 | orchestrator | 2025-06-19 10:20:29 | INFO  | Removed 6 group(s) in total 2025-06-19 10:20:42.529506 | orchestrator | 2025-06-19 10:20:29 | INFO  | Inventory overwrite handling completed 2025-06-19 10:20:42.529516 | orchestrator | 2025-06-19 10:20:31 | INFO  | Starting merge of inventory files 2025-06-19 10:20:42.529525 | orchestrator | 2025-06-19 10:20:31 | INFO  | Inventory files merged successfully 2025-06-19 10:20:42.529535 | orchestrator | 2025-06-19 10:20:34 | INFO  | Generating ClusterShell configuration from Ansible inventory 2025-06-19 10:20:42.529558 | orchestrator | 2025-06-19 10:20:41 | INFO  | Successfully wrote ClusterShell configuration 2025-06-19 10:20:42.529569 | orchestrator | [master eb9a53d] 2025-06-19-10-20 2025-06-19 10:20:42.529580 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2025-06-19 10:20:44.225688 | orchestrator | 2025-06-19 10:20:44 | INFO  | Task 06fd6a7f-81cc-4a69-b405-cb9dfebae8df (ceph-create-lvm-devices) was prepared for execution. 2025-06-19 10:20:44.225792 | orchestrator | 2025-06-19 10:20:44 | INFO  | It takes a moment until task 06fd6a7f-81cc-4a69-b405-cb9dfebae8df (ceph-create-lvm-devices) has been started and output is visible here. 
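The OSD UUIDs carried into the ceph-create-lvm-devices play below are stable across runs. Their third group starts with a 5 (e.g. `3c3fffd7-e076-56d5-…`), i.e. they are name-based version-5 UUIDs, which suggests deterministic derivation rather than random generation. A hedged sketch of such a derivation — the namespace and name inputs here are assumptions, not taken from the playbook:

```python
import uuid

# Assumed inputs: a per-cluster namespace UUID plus a host/device name.
# The real playbook's inputs are not visible in this log.
NAMESPACE = uuid.UUID("00000000-0000-0000-0000-000000000000")  # placeholder

def osd_uuid(host: str, device: str) -> uuid.UUID:
    """Name-based (SHA-1, version 5) UUID: identical inputs always
    yield the identical UUID, so re-running stays idempotent."""
    return uuid.uuid5(NAMESPACE, f"{host}/{device}")

a = osd_uuid("testbed-node-5", "sdb")
b = osd_uuid("testbed-node-5", "sdb")
print(a == b, a.version)  # prints "True 5"
```

Determinism matters here because the same UUIDs must resolve to the same VG/LV names in both the configuration play above and the device-creation play that follows.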
2025-06-19 10:20:55.529954 | orchestrator | 2025-06-19 10:20:55.530249 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-06-19 10:20:55.530274 | orchestrator | 2025-06-19 10:20:55.530287 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-19 10:20:55.530298 | orchestrator | Thursday 19 June 2025 10:20:48 +0000 (0:00:00.279) 0:00:00.279 ********* 2025-06-19 10:20:55.530309 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-19 10:20:55.530320 | orchestrator | 2025-06-19 10:20:55.530331 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-19 10:20:55.530342 | orchestrator | Thursday 19 June 2025 10:20:48 +0000 (0:00:00.226) 0:00:00.506 ********* 2025-06-19 10:20:55.530353 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:20:55.530369 | orchestrator | 2025-06-19 10:20:55.530388 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-19 10:20:55.530406 | orchestrator | Thursday 19 June 2025 10:20:48 +0000 (0:00:00.203) 0:00:00.709 ********* 2025-06-19 10:20:55.530425 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-06-19 10:20:55.530443 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-06-19 10:20:55.530461 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-06-19 10:20:55.530479 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-06-19 10:20:55.530498 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-06-19 10:20:55.530518 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-06-19 10:20:55.530538 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-06-19 10:20:55.530557 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-06-19 10:20:55.530568 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-06-19 10:20:55.530579 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-06-19 10:20:55.530589 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-06-19 10:20:55.530620 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-06-19 10:20:55.530631 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-06-19 10:20:55.530642 | orchestrator | 2025-06-19 10:20:55.530653 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-19 10:20:55.530663 | orchestrator | Thursday 19 June 2025 10:20:49 +0000 (0:00:00.363) 0:00:01.072 ********* 2025-06-19 10:20:55.530674 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:20:55.530685 | orchestrator | 2025-06-19 10:20:55.530695 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-19 10:20:55.530712 | orchestrator | Thursday 19 June 2025 10:20:49 +0000 (0:00:00.375) 0:00:01.448 ********* 2025-06-19 10:20:55.530729 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:20:55.530747 | orchestrator | 2025-06-19 10:20:55.530765 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-19 10:20:55.530783 | orchestrator | Thursday 19 June 2025 10:20:49 +0000 (0:00:00.179) 0:00:01.628 ********* 2025-06-19 10:20:55.530801 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:20:55.530820 | orchestrator | 2025-06-19 10:20:55.530839 | orchestrator | TASK [Add known links 
to the list of available block devices] ****************** 2025-06-19 10:20:55.530858 | orchestrator | Thursday 19 June 2025 10:20:49 +0000 (0:00:00.185) 0:00:01.813 ********* 2025-06-19 10:20:55.530869 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:20:55.530880 | orchestrator | 2025-06-19 10:20:55.530891 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-19 10:20:55.530902 | orchestrator | Thursday 19 June 2025 10:20:49 +0000 (0:00:00.174) 0:00:01.988 ********* 2025-06-19 10:20:55.530913 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:20:55.530923 | orchestrator | 2025-06-19 10:20:55.530934 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-19 10:20:55.530945 | orchestrator | Thursday 19 June 2025 10:20:50 +0000 (0:00:00.174) 0:00:02.162 ********* 2025-06-19 10:20:55.530955 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:20:55.530966 | orchestrator | 2025-06-19 10:20:55.530976 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-19 10:20:55.530987 | orchestrator | Thursday 19 June 2025 10:20:50 +0000 (0:00:00.177) 0:00:02.339 ********* 2025-06-19 10:20:55.530998 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:20:55.531008 | orchestrator | 2025-06-19 10:20:55.531019 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-19 10:20:55.531029 | orchestrator | Thursday 19 June 2025 10:20:50 +0000 (0:00:00.174) 0:00:02.514 ********* 2025-06-19 10:20:55.531040 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:20:55.531078 | orchestrator | 2025-06-19 10:20:55.531089 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-19 10:20:55.531099 | orchestrator | Thursday 19 June 2025 10:20:50 +0000 (0:00:00.196) 0:00:02.711 ********* 2025-06-19 10:20:55.531110 | 
orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_236643a8-3fbf-4a38-ac5c-7d15a0179c3a) 2025-06-19 10:20:55.531122 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_236643a8-3fbf-4a38-ac5c-7d15a0179c3a) 2025-06-19 10:20:55.531132 | orchestrator | 2025-06-19 10:20:55.531143 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-19 10:20:55.531154 | orchestrator | Thursday 19 June 2025 10:20:51 +0000 (0:00:00.378) 0:00:03.090 ********* 2025-06-19 10:20:55.531184 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5fba7027-7a45-483b-8644-e0c0ef304581) 2025-06-19 10:20:55.531208 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5fba7027-7a45-483b-8644-e0c0ef304581) 2025-06-19 10:20:55.531220 | orchestrator | 2025-06-19 10:20:55.531230 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-19 10:20:55.531241 | orchestrator | Thursday 19 June 2025 10:20:51 +0000 (0:00:00.369) 0:00:03.460 ********* 2025-06-19 10:20:55.531262 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5cdb3fff-d4f1-405f-abd7-b446ee32738c) 2025-06-19 10:20:55.531273 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5cdb3fff-d4f1-405f-abd7-b446ee32738c) 2025-06-19 10:20:55.531284 | orchestrator | 2025-06-19 10:20:55.531295 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-19 10:20:55.531305 | orchestrator | Thursday 19 June 2025 10:20:52 +0000 (0:00:00.616) 0:00:04.076 ********* 2025-06-19 10:20:55.531316 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_6c4f0114-96df-472d-8cd2-75acad9ce658) 2025-06-19 10:20:55.531326 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_6c4f0114-96df-472d-8cd2-75acad9ce658) 2025-06-19 10:20:55.531337 | orchestrator | 2025-06-19 10:20:55.531348 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-19 10:20:55.531358 | orchestrator | Thursday 19 June 2025 10:20:52 +0000 (0:00:00.645) 0:00:04.721 ********* 2025-06-19 10:20:55.531369 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-19 10:20:55.531379 | orchestrator | 2025-06-19 10:20:55.531390 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-19 10:20:55.531400 | orchestrator | Thursday 19 June 2025 10:20:53 +0000 (0:00:00.760) 0:00:05.481 ********* 2025-06-19 10:20:55.531411 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-06-19 10:20:55.531422 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-06-19 10:20:55.531432 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-06-19 10:20:55.531443 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-06-19 10:20:55.531453 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-06-19 10:20:55.531463 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-06-19 10:20:55.531474 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-06-19 10:20:55.531484 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-06-19 10:20:55.531495 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-06-19 10:20:55.531505 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-06-19 10:20:55.531516 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-06-19 10:20:55.531526 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-06-19 10:20:55.531537 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-06-19 10:20:55.531547 | orchestrator | 2025-06-19 10:20:55.531558 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-19 10:20:55.531569 | orchestrator | Thursday 19 June 2025 10:20:53 +0000 (0:00:00.411) 0:00:05.893 ********* 2025-06-19 10:20:55.531579 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:20:55.531590 | orchestrator | 2025-06-19 10:20:55.531600 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-19 10:20:55.531611 | orchestrator | Thursday 19 June 2025 10:20:54 +0000 (0:00:00.213) 0:00:06.107 ********* 2025-06-19 10:20:55.531621 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:20:55.531632 | orchestrator | 2025-06-19 10:20:55.531642 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-19 10:20:55.531653 | orchestrator | Thursday 19 June 2025 10:20:54 +0000 (0:00:00.214) 0:00:06.322 ********* 2025-06-19 10:20:55.531663 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:20:55.531674 | orchestrator | 2025-06-19 10:20:55.531684 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-19 10:20:55.531701 | orchestrator | Thursday 19 June 2025 10:20:54 +0000 (0:00:00.196) 0:00:06.518 ********* 2025-06-19 10:20:55.531711 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:20:55.531722 | orchestrator | 2025-06-19 10:20:55.531732 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-19 10:20:55.531743 | orchestrator | Thursday 19 June 2025 
10:20:54 +0000 (0:00:00.205) 0:00:06.724 ********* 2025-06-19 10:20:55.531753 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:20:55.531764 | orchestrator | 2025-06-19 10:20:55.531775 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-19 10:20:55.531790 | orchestrator | Thursday 19 June 2025 10:20:54 +0000 (0:00:00.195) 0:00:06.920 ********* 2025-06-19 10:20:55.531800 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:20:55.531811 | orchestrator | 2025-06-19 10:20:55.531821 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-19 10:20:55.531832 | orchestrator | Thursday 19 June 2025 10:20:55 +0000 (0:00:00.203) 0:00:07.124 ********* 2025-06-19 10:20:55.531843 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:20:55.531853 | orchestrator | 2025-06-19 10:20:55.531864 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-19 10:20:55.531875 | orchestrator | Thursday 19 June 2025 10:20:55 +0000 (0:00:00.210) 0:00:07.334 ********* 2025-06-19 10:20:55.531892 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:21:03.441382 | orchestrator | 2025-06-19 10:21:03.441484 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-19 10:21:03.441502 | orchestrator | Thursday 19 June 2025 10:20:55 +0000 (0:00:00.227) 0:00:07.561 ********* 2025-06-19 10:21:03.441514 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-06-19 10:21:03.441535 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-06-19 10:21:03.441553 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-06-19 10:21:03.441574 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-06-19 10:21:03.441592 | orchestrator | 2025-06-19 10:21:03.441612 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-19 10:21:03.441633 | 
orchestrator | Thursday 19 June 2025 10:20:56 +0000 (0:00:01.040) 0:00:08.601 ********* 2025-06-19 10:21:03.441653 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:21:03.441673 | orchestrator | 2025-06-19 10:21:03.441684 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-19 10:21:03.441695 | orchestrator | Thursday 19 June 2025 10:20:56 +0000 (0:00:00.201) 0:00:08.803 ********* 2025-06-19 10:21:03.441706 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:21:03.441717 | orchestrator | 2025-06-19 10:21:03.441728 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-19 10:21:03.441739 | orchestrator | Thursday 19 June 2025 10:20:56 +0000 (0:00:00.182) 0:00:08.986 ********* 2025-06-19 10:21:03.441749 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:21:03.441760 | orchestrator | 2025-06-19 10:21:03.441771 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-19 10:21:03.441782 | orchestrator | Thursday 19 June 2025 10:20:57 +0000 (0:00:00.209) 0:00:09.195 ********* 2025-06-19 10:21:03.441793 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:21:03.441803 | orchestrator | 2025-06-19 10:21:03.441814 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-06-19 10:21:03.441825 | orchestrator | Thursday 19 June 2025 10:20:57 +0000 (0:00:00.203) 0:00:09.399 ********* 2025-06-19 10:21:03.441836 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:21:03.441847 | orchestrator | 2025-06-19 10:21:03.441857 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-06-19 10:21:03.441868 | orchestrator | Thursday 19 June 2025 10:20:57 +0000 (0:00:00.139) 0:00:09.539 ********* 2025-06-19 10:21:03.441880 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 
'3f69fe47-683a-554f-92f7-031e2a26df27'}}) 2025-06-19 10:21:03.441891 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '04cfa187-5820-5d05-93de-747bac6f19c1'}}) 2025-06-19 10:21:03.441923 | orchestrator | 2025-06-19 10:21:03.441937 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-06-19 10:21:03.441950 | orchestrator | Thursday 19 June 2025 10:20:57 +0000 (0:00:00.192) 0:00:09.731 ********* 2025-06-19 10:21:03.441963 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-3f69fe47-683a-554f-92f7-031e2a26df27', 'data_vg': 'ceph-3f69fe47-683a-554f-92f7-031e2a26df27'}) 2025-06-19 10:21:03.441977 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-04cfa187-5820-5d05-93de-747bac6f19c1', 'data_vg': 'ceph-04cfa187-5820-5d05-93de-747bac6f19c1'}) 2025-06-19 10:21:03.441989 | orchestrator | 2025-06-19 10:21:03.442002 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-06-19 10:21:03.442014 | orchestrator | Thursday 19 June 2025 10:20:59 +0000 (0:00:01.949) 0:00:11.680 ********* 2025-06-19 10:21:03.442117 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f69fe47-683a-554f-92f7-031e2a26df27', 'data_vg': 'ceph-3f69fe47-683a-554f-92f7-031e2a26df27'})  2025-06-19 10:21:03.442139 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-04cfa187-5820-5d05-93de-747bac6f19c1', 'data_vg': 'ceph-04cfa187-5820-5d05-93de-747bac6f19c1'})  2025-06-19 10:21:03.442151 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:21:03.442164 | orchestrator | 2025-06-19 10:21:03.442176 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-06-19 10:21:03.442188 | orchestrator | Thursday 19 June 2025 10:20:59 +0000 (0:00:00.159) 0:00:11.840 ********* 2025-06-19 10:21:03.442201 | orchestrator | changed: [testbed-node-3] => (item={'data': 
'osd-block-3f69fe47-683a-554f-92f7-031e2a26df27', 'data_vg': 'ceph-3f69fe47-683a-554f-92f7-031e2a26df27'}) 2025-06-19 10:21:03.442213 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-04cfa187-5820-5d05-93de-747bac6f19c1', 'data_vg': 'ceph-04cfa187-5820-5d05-93de-747bac6f19c1'}) 2025-06-19 10:21:03.442225 | orchestrator | 2025-06-19 10:21:03.442238 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-06-19 10:21:03.442250 | orchestrator | Thursday 19 June 2025 10:21:01 +0000 (0:00:01.453) 0:00:13.293 ********* 2025-06-19 10:21:03.442262 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f69fe47-683a-554f-92f7-031e2a26df27', 'data_vg': 'ceph-3f69fe47-683a-554f-92f7-031e2a26df27'})  2025-06-19 10:21:03.442273 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-04cfa187-5820-5d05-93de-747bac6f19c1', 'data_vg': 'ceph-04cfa187-5820-5d05-93de-747bac6f19c1'})  2025-06-19 10:21:03.442284 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:21:03.442295 | orchestrator | 2025-06-19 10:21:03.442306 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-06-19 10:21:03.442317 | orchestrator | Thursday 19 June 2025 10:21:01 +0000 (0:00:00.151) 0:00:13.444 ********* 2025-06-19 10:21:03.442328 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:21:03.442338 | orchestrator | 2025-06-19 10:21:03.442349 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-06-19 10:21:03.442380 | orchestrator | Thursday 19 June 2025 10:21:01 +0000 (0:00:00.134) 0:00:13.578 ********* 2025-06-19 10:21:03.442391 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f69fe47-683a-554f-92f7-031e2a26df27', 'data_vg': 'ceph-3f69fe47-683a-554f-92f7-031e2a26df27'})  2025-06-19 10:21:03.442402 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-04cfa187-5820-5d05-93de-747bac6f19c1', 'data_vg': 'ceph-04cfa187-5820-5d05-93de-747bac6f19c1'})  2025-06-19 10:21:03.442413 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:21:03.442424 | orchestrator | 2025-06-19 10:21:03.442435 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-06-19 10:21:03.442446 | orchestrator | Thursday 19 June 2025 10:21:01 +0000 (0:00:00.348) 0:00:13.926 ********* 2025-06-19 10:21:03.442456 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:21:03.442477 | orchestrator | 2025-06-19 10:21:03.442488 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-06-19 10:21:03.442499 | orchestrator | Thursday 19 June 2025 10:21:02 +0000 (0:00:00.135) 0:00:14.062 ********* 2025-06-19 10:21:03.442526 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f69fe47-683a-554f-92f7-031e2a26df27', 'data_vg': 'ceph-3f69fe47-683a-554f-92f7-031e2a26df27'})  2025-06-19 10:21:03.442538 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-04cfa187-5820-5d05-93de-747bac6f19c1', 'data_vg': 'ceph-04cfa187-5820-5d05-93de-747bac6f19c1'})  2025-06-19 10:21:03.442548 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:21:03.442559 | orchestrator | 2025-06-19 10:21:03.442570 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-06-19 10:21:03.442581 | orchestrator | Thursday 19 June 2025 10:21:02 +0000 (0:00:00.154) 0:00:14.217 ********* 2025-06-19 10:21:03.442591 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:21:03.442602 | orchestrator | 2025-06-19 10:21:03.442612 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-06-19 10:21:03.442623 | orchestrator | Thursday 19 June 2025 10:21:02 +0000 (0:00:00.138) 0:00:14.355 ********* 2025-06-19 10:21:03.442635 | orchestrator | skipping: 
[testbed-node-3] => (item={'data': 'osd-block-3f69fe47-683a-554f-92f7-031e2a26df27', 'data_vg': 'ceph-3f69fe47-683a-554f-92f7-031e2a26df27'})  2025-06-19 10:21:03.442646 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-04cfa187-5820-5d05-93de-747bac6f19c1', 'data_vg': 'ceph-04cfa187-5820-5d05-93de-747bac6f19c1'})  2025-06-19 10:21:03.442657 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:21:03.442667 | orchestrator | 2025-06-19 10:21:03.442678 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-06-19 10:21:03.442689 | orchestrator | Thursday 19 June 2025 10:21:02 +0000 (0:00:00.156) 0:00:14.511 ********* 2025-06-19 10:21:03.442699 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:21:03.442711 | orchestrator | 2025-06-19 10:21:03.442721 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-06-19 10:21:03.442732 | orchestrator | Thursday 19 June 2025 10:21:02 +0000 (0:00:00.147) 0:00:14.659 ********* 2025-06-19 10:21:03.442742 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f69fe47-683a-554f-92f7-031e2a26df27', 'data_vg': 'ceph-3f69fe47-683a-554f-92f7-031e2a26df27'})  2025-06-19 10:21:03.442753 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-04cfa187-5820-5d05-93de-747bac6f19c1', 'data_vg': 'ceph-04cfa187-5820-5d05-93de-747bac6f19c1'})  2025-06-19 10:21:03.442764 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:21:03.442775 | orchestrator | 2025-06-19 10:21:03.442785 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-06-19 10:21:03.442796 | orchestrator | Thursday 19 June 2025 10:21:02 +0000 (0:00:00.145) 0:00:14.805 ********* 2025-06-19 10:21:03.442807 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f69fe47-683a-554f-92f7-031e2a26df27', 'data_vg': 'ceph-3f69fe47-683a-554f-92f7-031e2a26df27'})  
2025-06-19 10:21:03.442818 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-04cfa187-5820-5d05-93de-747bac6f19c1', 'data_vg': 'ceph-04cfa187-5820-5d05-93de-747bac6f19c1'})  2025-06-19 10:21:03.442828 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:21:03.442839 | orchestrator | 2025-06-19 10:21:03.442849 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-06-19 10:21:03.442860 | orchestrator | Thursday 19 June 2025 10:21:02 +0000 (0:00:00.187) 0:00:14.993 ********* 2025-06-19 10:21:03.442871 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f69fe47-683a-554f-92f7-031e2a26df27', 'data_vg': 'ceph-3f69fe47-683a-554f-92f7-031e2a26df27'})  2025-06-19 10:21:03.442886 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-04cfa187-5820-5d05-93de-747bac6f19c1', 'data_vg': 'ceph-04cfa187-5820-5d05-93de-747bac6f19c1'})  2025-06-19 10:21:03.442904 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:21:03.442915 | orchestrator | 2025-06-19 10:21:03.442926 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-06-19 10:21:03.442937 | orchestrator | Thursday 19 June 2025 10:21:03 +0000 (0:00:00.153) 0:00:15.146 ********* 2025-06-19 10:21:03.442947 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:21:03.442958 | orchestrator | 2025-06-19 10:21:03.442969 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-06-19 10:21:03.442980 | orchestrator | Thursday 19 June 2025 10:21:03 +0000 (0:00:00.173) 0:00:15.319 ********* 2025-06-19 10:21:03.442990 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:21:03.443001 | orchestrator | 2025-06-19 10:21:03.443018 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-06-19 10:21:09.729286 | orchestrator | Thursday 19 June 2025 10:21:03 +0000 (0:00:00.154) 
0:00:15.473 ********* 2025-06-19 10:21:09.729401 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:21:09.729417 | orchestrator | 2025-06-19 10:21:09.729430 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-06-19 10:21:09.729441 | orchestrator | Thursday 19 June 2025 10:21:03 +0000 (0:00:00.127) 0:00:15.601 ********* 2025-06-19 10:21:09.729452 | orchestrator | ok: [testbed-node-3] => { 2025-06-19 10:21:09.729464 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-06-19 10:21:09.729475 | orchestrator | } 2025-06-19 10:21:09.729487 | orchestrator | 2025-06-19 10:21:09.729498 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-06-19 10:21:09.729509 | orchestrator | Thursday 19 June 2025 10:21:03 +0000 (0:00:00.152) 0:00:15.754 ********* 2025-06-19 10:21:09.729520 | orchestrator | ok: [testbed-node-3] => { 2025-06-19 10:21:09.729530 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-06-19 10:21:09.729541 | orchestrator | } 2025-06-19 10:21:09.729552 | orchestrator | 2025-06-19 10:21:09.729563 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-06-19 10:21:09.729574 | orchestrator | Thursday 19 June 2025 10:21:04 +0000 (0:00:00.345) 0:00:16.099 ********* 2025-06-19 10:21:09.729585 | orchestrator | ok: [testbed-node-3] => { 2025-06-19 10:21:09.729596 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-06-19 10:21:09.729607 | orchestrator | } 2025-06-19 10:21:09.729618 | orchestrator | 2025-06-19 10:21:09.729630 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-06-19 10:21:09.729641 | orchestrator | Thursday 19 June 2025 10:21:04 +0000 (0:00:00.142) 0:00:16.242 ********* 2025-06-19 10:21:09.729652 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:21:09.729663 | orchestrator | 2025-06-19 10:21:09.729674 | orchestrator | TASK [Gather WAL VGs 
with total and available size in bytes] ******************* 2025-06-19 10:21:09.729685 | orchestrator | Thursday 19 June 2025 10:21:04 +0000 (0:00:00.670) 0:00:16.912 ********* 2025-06-19 10:21:09.729696 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:21:09.729707 | orchestrator | 2025-06-19 10:21:09.729718 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-06-19 10:21:09.729729 | orchestrator | Thursday 19 June 2025 10:21:05 +0000 (0:00:00.537) 0:00:17.450 ********* 2025-06-19 10:21:09.729740 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:21:09.729751 | orchestrator | 2025-06-19 10:21:09.729762 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-06-19 10:21:09.729773 | orchestrator | Thursday 19 June 2025 10:21:05 +0000 (0:00:00.538) 0:00:17.988 ********* 2025-06-19 10:21:09.729784 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:21:09.729795 | orchestrator | 2025-06-19 10:21:09.729805 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-06-19 10:21:09.729816 | orchestrator | Thursday 19 June 2025 10:21:06 +0000 (0:00:00.153) 0:00:18.141 ********* 2025-06-19 10:21:09.729827 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:21:09.729838 | orchestrator | 2025-06-19 10:21:09.729849 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-06-19 10:21:09.729860 | orchestrator | Thursday 19 June 2025 10:21:06 +0000 (0:00:00.108) 0:00:18.250 ********* 2025-06-19 10:21:09.729893 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:21:09.729904 | orchestrator | 2025-06-19 10:21:09.729915 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-06-19 10:21:09.729926 | orchestrator | Thursday 19 June 2025 10:21:06 +0000 (0:00:00.107) 0:00:18.357 ********* 2025-06-19 10:21:09.729936 | orchestrator | ok: 
[testbed-node-3] => { 2025-06-19 10:21:09.729947 | orchestrator |  "vgs_report": { 2025-06-19 10:21:09.729959 | orchestrator |  "vg": [] 2025-06-19 10:21:09.729970 | orchestrator |  } 2025-06-19 10:21:09.730006 | orchestrator | } 2025-06-19 10:21:09.730067 | orchestrator | 2025-06-19 10:21:09.730080 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-06-19 10:21:09.730091 | orchestrator | Thursday 19 June 2025 10:21:06 +0000 (0:00:00.149) 0:00:18.507 ********* 2025-06-19 10:21:09.730102 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:21:09.730112 | orchestrator | 2025-06-19 10:21:09.730123 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-06-19 10:21:09.730134 | orchestrator | Thursday 19 June 2025 10:21:06 +0000 (0:00:00.133) 0:00:18.641 ********* 2025-06-19 10:21:09.730145 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:21:09.730156 | orchestrator | 2025-06-19 10:21:09.730167 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-06-19 10:21:09.730178 | orchestrator | Thursday 19 June 2025 10:21:06 +0000 (0:00:00.133) 0:00:18.775 ********* 2025-06-19 10:21:09.730189 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:21:09.730199 | orchestrator | 2025-06-19 10:21:09.730210 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-06-19 10:21:09.730221 | orchestrator | Thursday 19 June 2025 10:21:06 +0000 (0:00:00.136) 0:00:18.911 ********* 2025-06-19 10:21:09.730232 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:21:09.730243 | orchestrator | 2025-06-19 10:21:09.730254 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-06-19 10:21:09.730265 | orchestrator | Thursday 19 June 2025 10:21:07 +0000 (0:00:00.329) 0:00:19.241 ********* 2025-06-19 10:21:09.730275 | orchestrator | skipping: 
[testbed-node-3] 2025-06-19 10:21:09.730286 | orchestrator | 2025-06-19 10:21:09.730297 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-06-19 10:21:09.730308 | orchestrator | Thursday 19 June 2025 10:21:07 +0000 (0:00:00.146) 0:00:19.387 ********* 2025-06-19 10:21:09.730319 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:21:09.730330 | orchestrator | 2025-06-19 10:21:09.730341 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-06-19 10:21:09.730352 | orchestrator | Thursday 19 June 2025 10:21:07 +0000 (0:00:00.144) 0:00:19.532 ********* 2025-06-19 10:21:09.730362 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:21:09.730373 | orchestrator | 2025-06-19 10:21:09.730384 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-06-19 10:21:09.730395 | orchestrator | Thursday 19 June 2025 10:21:07 +0000 (0:00:00.145) 0:00:19.677 ********* 2025-06-19 10:21:09.730406 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:21:09.730416 | orchestrator | 2025-06-19 10:21:09.730427 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-06-19 10:21:09.730458 | orchestrator | Thursday 19 June 2025 10:21:07 +0000 (0:00:00.122) 0:00:19.800 ********* 2025-06-19 10:21:09.730469 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:21:09.730480 | orchestrator | 2025-06-19 10:21:09.730491 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-06-19 10:21:09.730501 | orchestrator | Thursday 19 June 2025 10:21:07 +0000 (0:00:00.144) 0:00:19.944 ********* 2025-06-19 10:21:09.730512 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:21:09.730523 | orchestrator | 2025-06-19 10:21:09.730533 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-06-19 10:21:09.730544 | 
orchestrator | Thursday 19 June 2025 10:21:08 +0000 (0:00:00.129) 0:00:20.073 ********* 2025-06-19 10:21:09.730555 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:21:09.730575 | orchestrator | 2025-06-19 10:21:09.730586 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-06-19 10:21:09.730597 | orchestrator | Thursday 19 June 2025 10:21:08 +0000 (0:00:00.125) 0:00:20.199 ********* 2025-06-19 10:21:09.730607 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:21:09.730618 | orchestrator | 2025-06-19 10:21:09.730629 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-06-19 10:21:09.730640 | orchestrator | Thursday 19 June 2025 10:21:08 +0000 (0:00:00.142) 0:00:20.342 ********* 2025-06-19 10:21:09.730651 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:21:09.730661 | orchestrator | 2025-06-19 10:21:09.730672 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-06-19 10:21:09.730683 | orchestrator | Thursday 19 June 2025 10:21:08 +0000 (0:00:00.147) 0:00:20.490 ********* 2025-06-19 10:21:09.730694 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:21:09.730704 | orchestrator | 2025-06-19 10:21:09.730715 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-06-19 10:21:09.730726 | orchestrator | Thursday 19 June 2025 10:21:08 +0000 (0:00:00.142) 0:00:20.633 ********* 2025-06-19 10:21:09.730738 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f69fe47-683a-554f-92f7-031e2a26df27', 'data_vg': 'ceph-3f69fe47-683a-554f-92f7-031e2a26df27'})  2025-06-19 10:21:09.730750 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-04cfa187-5820-5d05-93de-747bac6f19c1', 'data_vg': 'ceph-04cfa187-5820-5d05-93de-747bac6f19c1'})  2025-06-19 10:21:09.730761 | orchestrator | skipping: [testbed-node-3] 2025-06-19 
10:21:09.730771 | orchestrator | 2025-06-19 10:21:09.730782 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-06-19 10:21:09.730793 | orchestrator | Thursday 19 June 2025 10:21:08 +0000 (0:00:00.161) 0:00:20.794 ********* 2025-06-19 10:21:09.730804 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f69fe47-683a-554f-92f7-031e2a26df27', 'data_vg': 'ceph-3f69fe47-683a-554f-92f7-031e2a26df27'})  2025-06-19 10:21:09.730815 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-04cfa187-5820-5d05-93de-747bac6f19c1', 'data_vg': 'ceph-04cfa187-5820-5d05-93de-747bac6f19c1'})  2025-06-19 10:21:09.730826 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:21:09.730836 | orchestrator | 2025-06-19 10:21:09.730847 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-06-19 10:21:09.730858 | orchestrator | Thursday 19 June 2025 10:21:09 +0000 (0:00:00.354) 0:00:21.149 ********* 2025-06-19 10:21:09.730869 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f69fe47-683a-554f-92f7-031e2a26df27', 'data_vg': 'ceph-3f69fe47-683a-554f-92f7-031e2a26df27'})  2025-06-19 10:21:09.730880 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-04cfa187-5820-5d05-93de-747bac6f19c1', 'data_vg': 'ceph-04cfa187-5820-5d05-93de-747bac6f19c1'})  2025-06-19 10:21:09.730891 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:21:09.730901 | orchestrator | 2025-06-19 10:21:09.730928 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-06-19 10:21:09.730940 | orchestrator | Thursday 19 June 2025 10:21:09 +0000 (0:00:00.153) 0:00:21.303 ********* 2025-06-19 10:21:09.730951 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f69fe47-683a-554f-92f7-031e2a26df27', 'data_vg': 'ceph-3f69fe47-683a-554f-92f7-031e2a26df27'})  2025-06-19 
10:21:09.730962 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-04cfa187-5820-5d05-93de-747bac6f19c1', 'data_vg': 'ceph-04cfa187-5820-5d05-93de-747bac6f19c1'})  2025-06-19 10:21:09.730972 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:21:09.730999 | orchestrator | 2025-06-19 10:21:09.731010 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-06-19 10:21:09.731021 | orchestrator | Thursday 19 June 2025 10:21:09 +0000 (0:00:00.163) 0:00:21.466 ********* 2025-06-19 10:21:09.731032 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f69fe47-683a-554f-92f7-031e2a26df27', 'data_vg': 'ceph-3f69fe47-683a-554f-92f7-031e2a26df27'})  2025-06-19 10:21:09.731055 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-04cfa187-5820-5d05-93de-747bac6f19c1', 'data_vg': 'ceph-04cfa187-5820-5d05-93de-747bac6f19c1'})  2025-06-19 10:21:09.731066 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:21:09.731077 | orchestrator | 2025-06-19 10:21:09.731088 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-06-19 10:21:09.731099 | orchestrator | Thursday 19 June 2025 10:21:09 +0000 (0:00:00.146) 0:00:21.613 ********* 2025-06-19 10:21:09.731110 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f69fe47-683a-554f-92f7-031e2a26df27', 'data_vg': 'ceph-3f69fe47-683a-554f-92f7-031e2a26df27'})  2025-06-19 10:21:09.731127 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-04cfa187-5820-5d05-93de-747bac6f19c1', 'data_vg': 'ceph-04cfa187-5820-5d05-93de-747bac6f19c1'})  2025-06-19 10:21:15.034379 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:21:15.034478 | orchestrator | 2025-06-19 10:21:15.034493 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-06-19 10:21:15.034507 | orchestrator | Thursday 19 June 2025 
10:21:09 +0000 (0:00:00.151) 0:00:21.764 ********* 2025-06-19 10:21:15.034519 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f69fe47-683a-554f-92f7-031e2a26df27', 'data_vg': 'ceph-3f69fe47-683a-554f-92f7-031e2a26df27'})  2025-06-19 10:21:15.034533 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-04cfa187-5820-5d05-93de-747bac6f19c1', 'data_vg': 'ceph-04cfa187-5820-5d05-93de-747bac6f19c1'})  2025-06-19 10:21:15.034544 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:21:15.034556 | orchestrator | 2025-06-19 10:21:15.034567 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-06-19 10:21:15.034579 | orchestrator | Thursday 19 June 2025 10:21:09 +0000 (0:00:00.152) 0:00:21.916 ********* 2025-06-19 10:21:15.034590 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f69fe47-683a-554f-92f7-031e2a26df27', 'data_vg': 'ceph-3f69fe47-683a-554f-92f7-031e2a26df27'})  2025-06-19 10:21:15.034601 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-04cfa187-5820-5d05-93de-747bac6f19c1', 'data_vg': 'ceph-04cfa187-5820-5d05-93de-747bac6f19c1'})  2025-06-19 10:21:15.034613 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:21:15.034624 | orchestrator | 2025-06-19 10:21:15.034635 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-06-19 10:21:15.034646 | orchestrator | Thursday 19 June 2025 10:21:10 +0000 (0:00:00.146) 0:00:22.062 ********* 2025-06-19 10:21:15.034658 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:21:15.034670 | orchestrator | 2025-06-19 10:21:15.034681 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-06-19 10:21:15.034692 | orchestrator | Thursday 19 June 2025 10:21:10 +0000 (0:00:00.525) 0:00:22.588 ********* 2025-06-19 10:21:15.034703 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:21:15.034714 | 
orchestrator | 2025-06-19 10:21:15.034725 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-06-19 10:21:15.034737 | orchestrator | Thursday 19 June 2025 10:21:11 +0000 (0:00:00.536) 0:00:23.125 ********* 2025-06-19 10:21:15.034747 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:21:15.034759 | orchestrator | 2025-06-19 10:21:15.034770 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-06-19 10:21:15.034781 | orchestrator | Thursday 19 June 2025 10:21:11 +0000 (0:00:00.140) 0:00:23.265 ********* 2025-06-19 10:21:15.034793 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-04cfa187-5820-5d05-93de-747bac6f19c1', 'vg_name': 'ceph-04cfa187-5820-5d05-93de-747bac6f19c1'}) 2025-06-19 10:21:15.034805 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-3f69fe47-683a-554f-92f7-031e2a26df27', 'vg_name': 'ceph-3f69fe47-683a-554f-92f7-031e2a26df27'}) 2025-06-19 10:21:15.034816 | orchestrator | 2025-06-19 10:21:15.034848 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-06-19 10:21:15.034860 | orchestrator | Thursday 19 June 2025 10:21:11 +0000 (0:00:00.172) 0:00:23.438 ********* 2025-06-19 10:21:15.034871 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f69fe47-683a-554f-92f7-031e2a26df27', 'data_vg': 'ceph-3f69fe47-683a-554f-92f7-031e2a26df27'})  2025-06-19 10:21:15.034885 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-04cfa187-5820-5d05-93de-747bac6f19c1', 'data_vg': 'ceph-04cfa187-5820-5d05-93de-747bac6f19c1'})  2025-06-19 10:21:15.034897 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:21:15.034910 | orchestrator | 2025-06-19 10:21:15.034946 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-06-19 10:21:15.034960 | orchestrator | Thursday 19 June 2025 10:21:11 +0000 
(0:00:00.153) 0:00:23.592 *********
2025-06-19 10:21:15.034973 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f69fe47-683a-554f-92f7-031e2a26df27', 'data_vg': 'ceph-3f69fe47-683a-554f-92f7-031e2a26df27'})
2025-06-19 10:21:15.034985 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-04cfa187-5820-5d05-93de-747bac6f19c1', 'data_vg': 'ceph-04cfa187-5820-5d05-93de-747bac6f19c1'})
2025-06-19 10:21:15.034998 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:21:15.035011 | orchestrator |
2025-06-19 10:21:15.035024 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-06-19 10:21:15.035037 | orchestrator | Thursday 19 June 2025 10:21:11 +0000 (0:00:00.341) 0:00:23.934 *********
2025-06-19 10:21:15.035064 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-3f69fe47-683a-554f-92f7-031e2a26df27', 'data_vg': 'ceph-3f69fe47-683a-554f-92f7-031e2a26df27'})
2025-06-19 10:21:15.035077 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-04cfa187-5820-5d05-93de-747bac6f19c1', 'data_vg': 'ceph-04cfa187-5820-5d05-93de-747bac6f19c1'})
2025-06-19 10:21:15.035090 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:21:15.035103 | orchestrator |
2025-06-19 10:21:15.035115 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-06-19 10:21:15.035128 | orchestrator | Thursday 19 June 2025 10:21:12 +0000 (0:00:00.161) 0:00:24.096 *********
2025-06-19 10:21:15.035141 | orchestrator | ok: [testbed-node-3] => {
2025-06-19 10:21:15.035153 | orchestrator |  "lvm_report": {
2025-06-19 10:21:15.035165 | orchestrator |  "lv": [
2025-06-19 10:21:15.035177 | orchestrator |  {
2025-06-19 10:21:15.035208 | orchestrator |  "lv_name": "osd-block-04cfa187-5820-5d05-93de-747bac6f19c1",
2025-06-19 10:21:15.035223 | orchestrator |  "vg_name": "ceph-04cfa187-5820-5d05-93de-747bac6f19c1"
2025-06-19 10:21:15.035235 | orchestrator |  },
2025-06-19 10:21:15.035245 | orchestrator |  {
2025-06-19 10:21:15.035256 | orchestrator |  "lv_name": "osd-block-3f69fe47-683a-554f-92f7-031e2a26df27",
2025-06-19 10:21:15.035267 | orchestrator |  "vg_name": "ceph-3f69fe47-683a-554f-92f7-031e2a26df27"
2025-06-19 10:21:15.035277 | orchestrator |  }
2025-06-19 10:21:15.035288 | orchestrator |  ],
2025-06-19 10:21:15.035299 | orchestrator |  "pv": [
2025-06-19 10:21:15.035309 | orchestrator |  {
2025-06-19 10:21:15.035319 | orchestrator |  "pv_name": "/dev/sdb",
2025-06-19 10:21:15.035330 | orchestrator |  "vg_name": "ceph-3f69fe47-683a-554f-92f7-031e2a26df27"
2025-06-19 10:21:15.035340 | orchestrator |  },
2025-06-19 10:21:15.035351 | orchestrator |  {
2025-06-19 10:21:15.035361 | orchestrator |  "pv_name": "/dev/sdc",
2025-06-19 10:21:15.035372 | orchestrator |  "vg_name": "ceph-04cfa187-5820-5d05-93de-747bac6f19c1"
2025-06-19 10:21:15.035383 | orchestrator |  }
2025-06-19 10:21:15.035393 | orchestrator |  ]
2025-06-19 10:21:15.035404 | orchestrator |  }
2025-06-19 10:21:15.035416 | orchestrator | }
2025-06-19 10:21:15.035427 | orchestrator |
2025-06-19 10:21:15.035438 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-06-19 10:21:15.035456 | orchestrator |
2025-06-19 10:21:15.035467 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-19 10:21:15.035477 | orchestrator | Thursday 19 June 2025 10:21:12 +0000 (0:00:00.284) 0:00:24.380 *********
2025-06-19 10:21:15.035488 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-06-19 10:21:15.035499 | orchestrator |
2025-06-19 10:21:15.035509 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-06-19 10:21:15.035520 | orchestrator | Thursday 19 June 2025 10:21:12 +0000 (0:00:00.243) 0:00:24.623 *********
2025-06-19 10:21:15.035530 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:21:15.035541 | orchestrator |
2025-06-19 10:21:15.035551 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:21:15.035562 | orchestrator | Thursday 19 June 2025 10:21:12 +0000 (0:00:00.247) 0:00:24.871 *********
2025-06-19 10:21:15.035572 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-06-19 10:21:15.035583 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-06-19 10:21:15.035594 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-06-19 10:21:15.035604 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-06-19 10:21:15.035615 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-06-19 10:21:15.035625 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-06-19 10:21:15.035636 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-06-19 10:21:15.035646 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-06-19 10:21:15.035657 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-06-19 10:21:15.035667 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-06-19 10:21:15.035678 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-06-19 10:21:15.035688 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-06-19 10:21:15.035699 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-06-19 10:21:15.035709 | orchestrator |
2025-06-19 10:21:15.035720 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:21:15.035730 | orchestrator | Thursday 19 June 2025 10:21:13 +0000 (0:00:00.396) 0:00:25.268 *********
2025-06-19 10:21:15.035741 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:15.035751 | orchestrator |
2025-06-19 10:21:15.035762 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:21:15.035772 | orchestrator | Thursday 19 June 2025 10:21:13 +0000 (0:00:00.202) 0:00:25.470 *********
2025-06-19 10:21:15.035783 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:15.035793 | orchestrator |
2025-06-19 10:21:15.035804 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:21:15.035814 | orchestrator | Thursday 19 June 2025 10:21:13 +0000 (0:00:00.188) 0:00:25.659 *********
2025-06-19 10:21:15.035825 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:15.035835 | orchestrator |
2025-06-19 10:21:15.035846 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:21:15.035857 | orchestrator | Thursday 19 June 2025 10:21:13 +0000 (0:00:00.204) 0:00:25.863 *********
2025-06-19 10:21:15.035867 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:15.035878 | orchestrator |
2025-06-19 10:21:15.035888 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:21:15.035899 | orchestrator | Thursday 19 June 2025 10:21:14 +0000 (0:00:00.590) 0:00:26.453 *********
2025-06-19 10:21:15.035909 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:15.035941 | orchestrator |
2025-06-19 10:21:15.035953 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:21:15.035963 | orchestrator | Thursday 19 June 2025 10:21:14 +0000 (0:00:00.206) 0:00:26.660 *********
2025-06-19 10:21:15.035974 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:15.035985 | orchestrator |
2025-06-19 10:21:15.035996 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:21:15.036006 | orchestrator | Thursday 19 June 2025 10:21:14 +0000 (0:00:00.205) 0:00:26.866 *********
2025-06-19 10:21:15.036017 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:15.036028 | orchestrator |
2025-06-19 10:21:15.036045 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:21:25.083113 | orchestrator | Thursday 19 June 2025 10:21:15 +0000 (0:00:00.200) 0:00:27.067 *********
2025-06-19 10:21:25.083221 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:25.083237 | orchestrator |
2025-06-19 10:21:25.083250 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:21:25.083261 | orchestrator | Thursday 19 June 2025 10:21:15 +0000 (0:00:00.220) 0:00:27.287 *********
2025-06-19 10:21:25.083273 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_32c85e8d-b71e-43db-9ec2-d353b455abf6)
2025-06-19 10:21:25.083285 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_32c85e8d-b71e-43db-9ec2-d353b455abf6)
2025-06-19 10:21:25.083296 | orchestrator |
2025-06-19 10:21:25.083307 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:21:25.083318 | orchestrator | Thursday 19 June 2025 10:21:15 +0000 (0:00:00.438) 0:00:27.725 *********
2025-06-19 10:21:25.083329 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_6a40ab2f-d460-475a-85e2-5470cb1f2b74)
2025-06-19 10:21:25.083340 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_6a40ab2f-d460-475a-85e2-5470cb1f2b74)
2025-06-19 10:21:25.083350 | orchestrator |
2025-06-19 10:21:25.083361 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:21:25.083372 | orchestrator | Thursday 19 June 2025 10:21:16 +0000 (0:00:00.419) 0:00:28.144 *********
2025-06-19 10:21:25.083383 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_38f445f8-bcf4-4b54-8d34-faf3abd36175)
2025-06-19 10:21:25.083394 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_38f445f8-bcf4-4b54-8d34-faf3abd36175)
2025-06-19 10:21:25.083405 | orchestrator |
2025-06-19 10:21:25.083416 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:21:25.083426 | orchestrator | Thursday 19 June 2025 10:21:16 +0000 (0:00:00.434) 0:00:28.579 *********
2025-06-19 10:21:25.083437 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_2f17817e-651e-4f9a-8129-c3db8254ad0b)
2025-06-19 10:21:25.083468 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_2f17817e-651e-4f9a-8129-c3db8254ad0b)
2025-06-19 10:21:25.083479 | orchestrator |
2025-06-19 10:21:25.083490 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-19 10:21:25.083501 | orchestrator | Thursday 19 June 2025 10:21:16 +0000 (0:00:00.418) 0:00:28.998 *********
2025-06-19 10:21:25.083512 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-06-19 10:21:25.083523 | orchestrator |
2025-06-19 10:21:25.083535 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:21:25.083546 | orchestrator | Thursday 19 June 2025 10:21:17 +0000 (0:00:00.326) 0:00:29.324 *********
2025-06-19 10:21:25.083557 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-06-19 10:21:25.083568 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-06-19 10:21:25.083579 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-06-19 10:21:25.083590 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-06-19 10:21:25.083620 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-06-19 10:21:25.083633 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-06-19 10:21:25.083645 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-06-19 10:21:25.083657 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-06-19 10:21:25.083670 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-06-19 10:21:25.083682 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-06-19 10:21:25.083697 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-06-19 10:21:25.083716 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-06-19 10:21:25.083734 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-06-19 10:21:25.083753 | orchestrator |
2025-06-19 10:21:25.083773 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:21:25.083798 | orchestrator | Thursday 19 June 2025 10:21:17 +0000 (0:00:00.569) 0:00:29.893 *********
2025-06-19 10:21:25.083811 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:25.083881 | orchestrator |
2025-06-19 10:21:25.083894 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:21:25.083907 | orchestrator | Thursday 19 June 2025 10:21:18 +0000 (0:00:00.195) 0:00:30.089 *********
2025-06-19 10:21:25.083919 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:25.083931 | orchestrator |
2025-06-19 10:21:25.083943 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:21:25.083956 | orchestrator | Thursday 19 June 2025 10:21:18 +0000 (0:00:00.215) 0:00:30.304 *********
2025-06-19 10:21:25.083968 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:25.083979 | orchestrator |
2025-06-19 10:21:25.083990 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:21:25.084001 | orchestrator | Thursday 19 June 2025 10:21:18 +0000 (0:00:00.189) 0:00:30.493 *********
2025-06-19 10:21:25.084011 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:25.084022 | orchestrator |
2025-06-19 10:21:25.084054 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:21:25.084066 | orchestrator | Thursday 19 June 2025 10:21:18 +0000 (0:00:00.201) 0:00:30.695 *********
2025-06-19 10:21:25.084076 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:25.084087 | orchestrator |
2025-06-19 10:21:25.084098 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:21:25.084109 | orchestrator | Thursday 19 June 2025 10:21:18 +0000 (0:00:00.208) 0:00:30.904 *********
2025-06-19 10:21:25.084119 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:25.084130 | orchestrator |
2025-06-19 10:21:25.084141 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:21:25.084152 | orchestrator | Thursday 19 June 2025 10:21:19 +0000 (0:00:00.203) 0:00:31.107 *********
2025-06-19 10:21:25.084162 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:25.084173 | orchestrator |
2025-06-19 10:21:25.084184 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:21:25.084194 | orchestrator | Thursday 19 June 2025 10:21:19 +0000 (0:00:00.203) 0:00:31.311 *********
2025-06-19 10:21:25.084205 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:25.084216 | orchestrator |
2025-06-19 10:21:25.084226 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:21:25.084237 | orchestrator | Thursday 19 June 2025 10:21:19 +0000 (0:00:00.221) 0:00:31.532 *********
2025-06-19 10:21:25.084248 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-06-19 10:21:25.084258 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-06-19 10:21:25.084278 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-06-19 10:21:25.084289 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-06-19 10:21:25.084300 | orchestrator |
2025-06-19 10:21:25.084310 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:21:25.084321 | orchestrator | Thursday 19 June 2025 10:21:20 +0000 (0:00:00.813) 0:00:32.346 *********
2025-06-19 10:21:25.084332 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:25.084342 | orchestrator |
2025-06-19 10:21:25.084353 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:21:25.084364 | orchestrator | Thursday 19 June 2025 10:21:20 +0000 (0:00:00.197) 0:00:32.544 *********
2025-06-19 10:21:25.084374 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:25.084385 | orchestrator |
2025-06-19 10:21:25.084396 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:21:25.084406 | orchestrator | Thursday 19 June 2025 10:21:20 +0000 (0:00:00.195) 0:00:32.739 *********
2025-06-19 10:21:25.084417 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:25.084428 | orchestrator |
2025-06-19 10:21:25.084438 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-19 10:21:25.084449 | orchestrator | Thursday 19 June 2025 10:21:21 +0000 (0:00:00.642) 0:00:33.382 *********
2025-06-19 10:21:25.084460 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:25.084470 | orchestrator |
2025-06-19 10:21:25.084481 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-06-19 10:21:25.084492 | orchestrator | Thursday 19 June 2025 10:21:21 +0000 (0:00:00.197) 0:00:33.579 *********
2025-06-19 10:21:25.084502 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:25.084513 | orchestrator |
2025-06-19 10:21:25.084524 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-06-19 10:21:25.084534 | orchestrator | Thursday 19 June 2025 10:21:21 +0000 (0:00:00.130) 0:00:33.710 *********
2025-06-19 10:21:25.084545 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6ed986be-d550-5e98-86ee-1d899c3b1ca9'}})
2025-06-19 10:21:25.084556 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '79abc216-b4ba-5883-a19f-da26bd64d731'}})
2025-06-19 10:21:25.084567 | orchestrator |
2025-06-19 10:21:25.084578 | orchestrator | TASK [Create block VGs] ********************************************************
2025-06-19 10:21:25.084588 | orchestrator | Thursday 19 June 2025 10:21:21 +0000 (0:00:00.180) 0:00:33.891 *********
2025-06-19 10:21:25.084600 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-6ed986be-d550-5e98-86ee-1d899c3b1ca9', 'data_vg': 'ceph-6ed986be-d550-5e98-86ee-1d899c3b1ca9'})
2025-06-19 10:21:25.084612 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-79abc216-b4ba-5883-a19f-da26bd64d731', 'data_vg': 'ceph-79abc216-b4ba-5883-a19f-da26bd64d731'})
2025-06-19 10:21:25.084623 | orchestrator |
2025-06-19 10:21:25.084634 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-06-19 10:21:25.084645 | orchestrator | Thursday 19 June 2025 10:21:23 +0000 (0:00:01.772) 0:00:35.663 *********
2025-06-19 10:21:25.084661 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6ed986be-d550-5e98-86ee-1d899c3b1ca9', 'data_vg': 'ceph-6ed986be-d550-5e98-86ee-1d899c3b1ca9'})
2025-06-19 10:21:25.084673 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-79abc216-b4ba-5883-a19f-da26bd64d731', 'data_vg': 'ceph-79abc216-b4ba-5883-a19f-da26bd64d731'})
2025-06-19 10:21:25.084684 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:25.084694 | orchestrator |
2025-06-19 10:21:25.084705 | orchestrator | TASK [Create block LVs] ********************************************************
2025-06-19 10:21:25.084716 | orchestrator | Thursday 19 June 2025 10:21:23 +0000 (0:00:00.153) 0:00:35.817 *********
2025-06-19 10:21:25.084726 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-6ed986be-d550-5e98-86ee-1d899c3b1ca9', 'data_vg': 'ceph-6ed986be-d550-5e98-86ee-1d899c3b1ca9'})
2025-06-19 10:21:25.084749 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-79abc216-b4ba-5883-a19f-da26bd64d731', 'data_vg': 'ceph-79abc216-b4ba-5883-a19f-da26bd64d731'})
2025-06-19 10:21:25.084770 | orchestrator |
2025-06-19 10:21:25.084798 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-06-19 10:21:30.577455 | orchestrator | Thursday 19 June 2025 10:21:25 +0000 (0:00:01.295) 0:00:37.113 *********
2025-06-19 10:21:30.577556 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6ed986be-d550-5e98-86ee-1d899c3b1ca9', 'data_vg': 'ceph-6ed986be-d550-5e98-86ee-1d899c3b1ca9'})
2025-06-19 10:21:30.577573 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-79abc216-b4ba-5883-a19f-da26bd64d731', 'data_vg': 'ceph-79abc216-b4ba-5883-a19f-da26bd64d731'})
2025-06-19 10:21:30.577585 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:30.577598 | orchestrator |
2025-06-19 10:21:30.577610 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-06-19 10:21:30.577621 | orchestrator | Thursday 19 June 2025 10:21:25 +0000 (0:00:00.155) 0:00:37.269 *********
2025-06-19 10:21:30.577631 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:30.577642 | orchestrator |
2025-06-19 10:21:30.577653 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-06-19 10:21:30.577664 | orchestrator | Thursday 19 June 2025 10:21:25 +0000 (0:00:00.129) 0:00:37.398 *********
2025-06-19 10:21:30.577674 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6ed986be-d550-5e98-86ee-1d899c3b1ca9', 'data_vg': 'ceph-6ed986be-d550-5e98-86ee-1d899c3b1ca9'})
2025-06-19 10:21:30.577685 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-79abc216-b4ba-5883-a19f-da26bd64d731', 'data_vg': 'ceph-79abc216-b4ba-5883-a19f-da26bd64d731'})
2025-06-19 10:21:30.577696 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:30.577707 | orchestrator |
2025-06-19 10:21:30.577717 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-06-19 10:21:30.577728 | orchestrator | Thursday 19 June 2025 10:21:25 +0000 (0:00:00.165) 0:00:37.563 *********
2025-06-19 10:21:30.577739 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:30.577749 | orchestrator |
2025-06-19 10:21:30.577798 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-06-19 10:21:30.577810 | orchestrator | Thursday 19 June 2025 10:21:25 +0000 (0:00:00.148) 0:00:37.712 *********
2025-06-19 10:21:30.577820 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6ed986be-d550-5e98-86ee-1d899c3b1ca9', 'data_vg': 'ceph-6ed986be-d550-5e98-86ee-1d899c3b1ca9'})
2025-06-19 10:21:30.577831 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-79abc216-b4ba-5883-a19f-da26bd64d731', 'data_vg': 'ceph-79abc216-b4ba-5883-a19f-da26bd64d731'})
2025-06-19 10:21:30.577842 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:30.577853 | orchestrator |
2025-06-19 10:21:30.577863 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-06-19 10:21:30.577874 | orchestrator | Thursday 19 June 2025 10:21:25 +0000 (0:00:00.158) 0:00:37.870 *********
2025-06-19 10:21:30.577885 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:30.577896 | orchestrator |
2025-06-19 10:21:30.577906 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-06-19 10:21:30.577917 | orchestrator | Thursday 19 June 2025 10:21:26 +0000 (0:00:00.370) 0:00:38.241 *********
2025-06-19 10:21:30.577928 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6ed986be-d550-5e98-86ee-1d899c3b1ca9', 'data_vg': 'ceph-6ed986be-d550-5e98-86ee-1d899c3b1ca9'})
2025-06-19 10:21:30.577938 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-79abc216-b4ba-5883-a19f-da26bd64d731', 'data_vg': 'ceph-79abc216-b4ba-5883-a19f-da26bd64d731'})
2025-06-19 10:21:30.577949 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:30.577960 | orchestrator |
2025-06-19 10:21:30.577970 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-06-19 10:21:30.578003 | orchestrator | Thursday 19 June 2025 10:21:26 +0000 (0:00:00.155) 0:00:38.397 *********
2025-06-19 10:21:30.578114 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:21:30.578131 | orchestrator |
2025-06-19 10:21:30.578145 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-06-19 10:21:30.578157 | orchestrator | Thursday 19 June 2025 10:21:26 +0000 (0:00:00.149) 0:00:38.546 *********
2025-06-19 10:21:30.578170 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6ed986be-d550-5e98-86ee-1d899c3b1ca9', 'data_vg': 'ceph-6ed986be-d550-5e98-86ee-1d899c3b1ca9'})
2025-06-19 10:21:30.578182 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-79abc216-b4ba-5883-a19f-da26bd64d731', 'data_vg': 'ceph-79abc216-b4ba-5883-a19f-da26bd64d731'})
2025-06-19 10:21:30.578194 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:30.578207 | orchestrator |
2025-06-19 10:21:30.578219 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-06-19 10:21:30.578231 | orchestrator | Thursday 19 June 2025 10:21:26 +0000 (0:00:00.152) 0:00:38.698 *********
2025-06-19 10:21:30.578243 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6ed986be-d550-5e98-86ee-1d899c3b1ca9', 'data_vg': 'ceph-6ed986be-d550-5e98-86ee-1d899c3b1ca9'})
2025-06-19 10:21:30.578254 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-79abc216-b4ba-5883-a19f-da26bd64d731', 'data_vg': 'ceph-79abc216-b4ba-5883-a19f-da26bd64d731'})
2025-06-19 10:21:30.578267 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:30.578278 | orchestrator |
2025-06-19 10:21:30.578291 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-06-19 10:21:30.578303 | orchestrator | Thursday 19 June 2025 10:21:26 +0000 (0:00:00.153) 0:00:38.852 *********
2025-06-19 10:21:30.578333 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6ed986be-d550-5e98-86ee-1d899c3b1ca9', 'data_vg': 'ceph-6ed986be-d550-5e98-86ee-1d899c3b1ca9'})
2025-06-19 10:21:30.578344 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-79abc216-b4ba-5883-a19f-da26bd64d731', 'data_vg': 'ceph-79abc216-b4ba-5883-a19f-da26bd64d731'})
2025-06-19 10:21:30.578355 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:30.578366 | orchestrator |
2025-06-19 10:21:30.578377 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-06-19 10:21:30.578387 | orchestrator | Thursday 19 June 2025 10:21:26 +0000 (0:00:00.144) 0:00:38.996 *********
2025-06-19 10:21:30.578398 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:30.578409 | orchestrator |
2025-06-19 10:21:30.578419 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-06-19 10:21:30.578430 | orchestrator | Thursday 19 June 2025 10:21:27 +0000 (0:00:00.139) 0:00:39.135 *********
2025-06-19 10:21:30.578440 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:30.578451 | orchestrator |
2025-06-19 10:21:30.578461 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-06-19 10:21:30.578472 | orchestrator | Thursday 19 June 2025 10:21:27 +0000 (0:00:00.136) 0:00:39.272 *********
2025-06-19 10:21:30.578482 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:30.578493 | orchestrator |
2025-06-19 10:21:30.578504 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-06-19 10:21:30.578514 | orchestrator | Thursday 19 June 2025 10:21:27 +0000 (0:00:00.127) 0:00:39.399 *********
2025-06-19 10:21:30.578525 | orchestrator | ok: [testbed-node-4] => {
2025-06-19 10:21:30.578536 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2025-06-19 10:21:30.578547 | orchestrator | }
2025-06-19 10:21:30.578558 | orchestrator |
2025-06-19 10:21:30.578569 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-06-19 10:21:30.578580 | orchestrator | Thursday 19 June 2025 10:21:27 +0000 (0:00:00.140) 0:00:39.540 *********
2025-06-19 10:21:30.578590 | orchestrator | ok: [testbed-node-4] => {
2025-06-19 10:21:30.578601 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2025-06-19 10:21:30.578612 | orchestrator | }
2025-06-19 10:21:30.578631 | orchestrator |
2025-06-19 10:21:30.578642 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-06-19 10:21:30.578653 | orchestrator | Thursday 19 June 2025 10:21:27 +0000 (0:00:00.141) 0:00:39.681 *********
2025-06-19 10:21:30.578663 | orchestrator | ok: [testbed-node-4] => {
2025-06-19 10:21:30.578674 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2025-06-19 10:21:30.578685 | orchestrator | }
2025-06-19 10:21:30.578695 | orchestrator |
2025-06-19 10:21:30.578706 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-06-19 10:21:30.578717 | orchestrator | Thursday 19 June 2025 10:21:27 +0000 (0:00:00.138) 0:00:39.820 *********
2025-06-19 10:21:30.578728 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:21:30.578738 | orchestrator |
2025-06-19 10:21:30.578749 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-06-19 10:21:30.578794 | orchestrator | Thursday 19 June 2025 10:21:28 +0000 (0:00:00.715) 0:00:40.536 *********
2025-06-19 10:21:30.578806 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:21:30.578817 | orchestrator |
2025-06-19 10:21:30.578828 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-06-19 10:21:30.578839 | orchestrator | Thursday 19 June 2025 10:21:28 +0000 (0:00:00.510) 0:00:41.046 *********
2025-06-19 10:21:30.578849 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:21:30.578860 | orchestrator |
2025-06-19 10:21:30.578870 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-06-19 10:21:30.578881 | orchestrator | Thursday 19 June 2025 10:21:29 +0000 (0:00:00.489) 0:00:41.535 *********
2025-06-19 10:21:30.578892 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:21:30.578902 | orchestrator |
2025-06-19 10:21:30.578913 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-06-19 10:21:30.578923 | orchestrator | Thursday 19 June 2025 10:21:29 +0000 (0:00:00.143) 0:00:41.679 *********
2025-06-19 10:21:30.578934 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:30.578945 | orchestrator |
2025-06-19 10:21:30.578955 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-06-19 10:21:30.578966 | orchestrator | Thursday 19 June 2025 10:21:29 +0000 (0:00:00.094) 0:00:41.773 *********
2025-06-19 10:21:30.578977 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:30.578987 | orchestrator |
2025-06-19 10:21:30.578998 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-06-19 10:21:30.579008 | orchestrator | Thursday 19 June 2025 10:21:29 +0000 (0:00:00.124) 0:00:41.897 *********
2025-06-19 10:21:30.579019 | orchestrator | ok: [testbed-node-4] => {
2025-06-19 10:21:30.579029 | orchestrator |  "vgs_report": {
2025-06-19 10:21:30.579041 | orchestrator |  "vg": []
2025-06-19 10:21:30.579051 | orchestrator |  }
2025-06-19 10:21:30.579062 | orchestrator | }
2025-06-19 10:21:30.579072 | orchestrator |
2025-06-19 10:21:30.579083 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-06-19 10:21:30.579099 | orchestrator | Thursday 19 June 2025 10:21:30 +0000 (0:00:00.150) 0:00:42.048 *********
2025-06-19 10:21:30.579109 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:30.579120 | orchestrator |
2025-06-19 10:21:30.579131 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-06-19 10:21:30.579141 | orchestrator | Thursday 19 June 2025 10:21:30 +0000 (0:00:00.135) 0:00:42.184 *********
2025-06-19 10:21:30.579152 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:30.579162 | orchestrator |
2025-06-19 10:21:30.579173 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-06-19 10:21:30.579183 | orchestrator | Thursday 19 June 2025 10:21:30 +0000 (0:00:00.142) 0:00:42.326 *********
2025-06-19 10:21:30.579194 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:30.579204 | orchestrator |
2025-06-19 10:21:30.579215 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-06-19 10:21:30.579226 | orchestrator | Thursday 19 June 2025 10:21:30 +0000 (0:00:00.138) 0:00:42.465 *********
2025-06-19 10:21:30.579236 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:30.579254 | orchestrator |
2025-06-19 10:21:30.579265 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-06-19 10:21:30.579282 | orchestrator | Thursday 19 June 2025 10:21:30 +0000 (0:00:00.145) 0:00:42.611 *********
2025-06-19 10:21:35.349302 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:35.349393 | orchestrator |
2025-06-19 10:21:35.349409 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-06-19 10:21:35.349422 | orchestrator | Thursday 19 June 2025 10:21:30 +0000 (0:00:00.133) 0:00:42.744 *********
2025-06-19 10:21:35.349433 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:35.349444 | orchestrator |
2025-06-19 10:21:35.349455 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-06-19 10:21:35.349466 | orchestrator | Thursday 19 June 2025 10:21:31 +0000 (0:00:00.336) 0:00:43.081 *********
2025-06-19 10:21:35.349477 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:35.349488 | orchestrator |
2025-06-19 10:21:35.349499 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-06-19 10:21:35.349509 | orchestrator | Thursday 19 June 2025 10:21:31 +0000 (0:00:00.141) 0:00:43.223 *********
2025-06-19 10:21:35.349520 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:35.349531 | orchestrator |
2025-06-19 10:21:35.349542 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-06-19 10:21:35.349553 | orchestrator | Thursday 19 June 2025 10:21:31 +0000 (0:00:00.133) 0:00:43.356 *********
2025-06-19 10:21:35.349563 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:35.349574 | orchestrator |
2025-06-19 10:21:35.349586 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-06-19 10:21:35.349597 | orchestrator | Thursday 19 June 2025 10:21:31 +0000 (0:00:00.146) 0:00:43.502 *********
2025-06-19 10:21:35.349608 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:35.349618 | orchestrator |
2025-06-19 10:21:35.349629 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-06-19 10:21:35.349640 | orchestrator | Thursday 19 June 2025 10:21:31 +0000 (0:00:00.139) 0:00:43.641 *********
2025-06-19 10:21:35.349650 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:35.349661 | orchestrator |
2025-06-19 10:21:35.349672 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-06-19 10:21:35.349682 | orchestrator | Thursday 19 June 2025 10:21:31 +0000 (0:00:00.151) 0:00:43.793 *********
2025-06-19 10:21:35.349693 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:35.349703 | orchestrator |
2025-06-19 10:21:35.349742 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-06-19 10:21:35.349753 | orchestrator | Thursday 19 June 2025 10:21:31 +0000 (0:00:00.142) 0:00:43.936 *********
2025-06-19 10:21:35.349764 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:35.349775 | orchestrator |
2025-06-19 10:21:35.349786 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-06-19 10:21:35.349796 | orchestrator | Thursday 19 June 2025 10:21:32 +0000 (0:00:00.133) 0:00:44.069 *********
2025-06-19 10:21:35.349807 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:35.349818 | orchestrator |
2025-06-19 10:21:35.349829 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-06-19 10:21:35.349841 | orchestrator | Thursday 19 June 2025 10:21:32 +0000 (0:00:00.160) 0:00:44.230 *********
2025-06-19 10:21:35.349855 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6ed986be-d550-5e98-86ee-1d899c3b1ca9', 'data_vg': 'ceph-6ed986be-d550-5e98-86ee-1d899c3b1ca9'})
2025-06-19 10:21:35.349868 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-79abc216-b4ba-5883-a19f-da26bd64d731', 'data_vg': 'ceph-79abc216-b4ba-5883-a19f-da26bd64d731'})
2025-06-19 10:21:35.349880 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:35.349891 | orchestrator |
2025-06-19 10:21:35.349903 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-06-19 10:21:35.349915 | orchestrator | Thursday 19 June 2025 10:21:32 +0000 (0:00:00.154) 0:00:44.384 *********
2025-06-19 10:21:35.349950 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6ed986be-d550-5e98-86ee-1d899c3b1ca9', 'data_vg': 'ceph-6ed986be-d550-5e98-86ee-1d899c3b1ca9'})
2025-06-19 10:21:35.349963 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-79abc216-b4ba-5883-a19f-da26bd64d731', 'data_vg': 'ceph-79abc216-b4ba-5883-a19f-da26bd64d731'})
2025-06-19 10:21:35.349975 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:35.349987 | orchestrator |
2025-06-19 10:21:35.350000 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-06-19 10:21:35.350013 | orchestrator | Thursday 19 June 2025 10:21:32 +0000 (0:00:00.145) 0:00:44.530 *********
2025-06-19 10:21:35.350079 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6ed986be-d550-5e98-86ee-1d899c3b1ca9', 'data_vg': 'ceph-6ed986be-d550-5e98-86ee-1d899c3b1ca9'})
2025-06-19 10:21:35.350105 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-79abc216-b4ba-5883-a19f-da26bd64d731', 'data_vg': 'ceph-79abc216-b4ba-5883-a19f-da26bd64d731'})
2025-06-19 10:21:35.350118 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:35.350130 | orchestrator |
2025-06-19 10:21:35.350143 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-06-19 10:21:35.350155 | orchestrator | Thursday 19 June 2025 10:21:32 +0000 (0:00:00.157) 0:00:44.687 *********
2025-06-19 10:21:35.350167 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6ed986be-d550-5e98-86ee-1d899c3b1ca9', 'data_vg': 'ceph-6ed986be-d550-5e98-86ee-1d899c3b1ca9'})
2025-06-19 10:21:35.350179 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-79abc216-b4ba-5883-a19f-da26bd64d731', 'data_vg': 'ceph-79abc216-b4ba-5883-a19f-da26bd64d731'})
2025-06-19 10:21:35.350190 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:21:35.350201 | orchestrator |
2025-06-19 10:21:35.350211 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-06-19 10:21:35.350240 | orchestrator | Thursday 19 June 2025 10:21:33 +0000 (0:00:00.362) 0:00:45.050 *********
2025-06-19 10:21:35.350252 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6ed986be-d550-5e98-86ee-1d899c3b1ca9', 'data_vg': 'ceph-6ed986be-d550-5e98-86ee-1d899c3b1ca9'})
2025-06-19 10:21:35.350263 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-79abc216-b4ba-5883-a19f-da26bd64d731',
'data_vg': 'ceph-79abc216-b4ba-5883-a19f-da26bd64d731'})  2025-06-19 10:21:35.350273 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:21:35.350284 | orchestrator | 2025-06-19 10:21:35.350295 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-06-19 10:21:35.350305 | orchestrator | Thursday 19 June 2025 10:21:33 +0000 (0:00:00.170) 0:00:45.220 ********* 2025-06-19 10:21:35.350316 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6ed986be-d550-5e98-86ee-1d899c3b1ca9', 'data_vg': 'ceph-6ed986be-d550-5e98-86ee-1d899c3b1ca9'})  2025-06-19 10:21:35.350327 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-79abc216-b4ba-5883-a19f-da26bd64d731', 'data_vg': 'ceph-79abc216-b4ba-5883-a19f-da26bd64d731'})  2025-06-19 10:21:35.350338 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:21:35.350349 | orchestrator | 2025-06-19 10:21:35.350359 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-06-19 10:21:35.350370 | orchestrator | Thursday 19 June 2025 10:21:33 +0000 (0:00:00.156) 0:00:45.377 ********* 2025-06-19 10:21:35.350381 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6ed986be-d550-5e98-86ee-1d899c3b1ca9', 'data_vg': 'ceph-6ed986be-d550-5e98-86ee-1d899c3b1ca9'})  2025-06-19 10:21:35.350392 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-79abc216-b4ba-5883-a19f-da26bd64d731', 'data_vg': 'ceph-79abc216-b4ba-5883-a19f-da26bd64d731'})  2025-06-19 10:21:35.350402 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:21:35.350413 | orchestrator | 2025-06-19 10:21:35.350423 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-06-19 10:21:35.350442 | orchestrator | Thursday 19 June 2025 10:21:33 +0000 (0:00:00.162) 0:00:45.540 ********* 2025-06-19 10:21:35.350453 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-6ed986be-d550-5e98-86ee-1d899c3b1ca9', 'data_vg': 'ceph-6ed986be-d550-5e98-86ee-1d899c3b1ca9'})  2025-06-19 10:21:35.350464 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-79abc216-b4ba-5883-a19f-da26bd64d731', 'data_vg': 'ceph-79abc216-b4ba-5883-a19f-da26bd64d731'})  2025-06-19 10:21:35.350475 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:21:35.350485 | orchestrator | 2025-06-19 10:21:35.350496 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-06-19 10:21:35.350506 | orchestrator | Thursday 19 June 2025 10:21:33 +0000 (0:00:00.166) 0:00:45.706 ********* 2025-06-19 10:21:35.350517 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:21:35.350528 | orchestrator | 2025-06-19 10:21:35.350539 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-06-19 10:21:35.350549 | orchestrator | Thursday 19 June 2025 10:21:34 +0000 (0:00:00.514) 0:00:46.221 ********* 2025-06-19 10:21:35.350560 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:21:35.350571 | orchestrator | 2025-06-19 10:21:35.350581 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-06-19 10:21:35.350592 | orchestrator | Thursday 19 June 2025 10:21:34 +0000 (0:00:00.511) 0:00:46.733 ********* 2025-06-19 10:21:35.350603 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:21:35.350613 | orchestrator | 2025-06-19 10:21:35.350624 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-06-19 10:21:35.350634 | orchestrator | Thursday 19 June 2025 10:21:34 +0000 (0:00:00.152) 0:00:46.885 ********* 2025-06-19 10:21:35.350645 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-6ed986be-d550-5e98-86ee-1d899c3b1ca9', 'vg_name': 'ceph-6ed986be-d550-5e98-86ee-1d899c3b1ca9'}) 2025-06-19 10:21:35.350657 | orchestrator | ok: [testbed-node-4] => 
(item={'lv_name': 'osd-block-79abc216-b4ba-5883-a19f-da26bd64d731', 'vg_name': 'ceph-79abc216-b4ba-5883-a19f-da26bd64d731'}) 2025-06-19 10:21:35.350668 | orchestrator | 2025-06-19 10:21:35.350678 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-06-19 10:21:35.350689 | orchestrator | Thursday 19 June 2025 10:21:35 +0000 (0:00:00.164) 0:00:47.050 ********* 2025-06-19 10:21:35.350705 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6ed986be-d550-5e98-86ee-1d899c3b1ca9', 'data_vg': 'ceph-6ed986be-d550-5e98-86ee-1d899c3b1ca9'})  2025-06-19 10:21:35.350750 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-79abc216-b4ba-5883-a19f-da26bd64d731', 'data_vg': 'ceph-79abc216-b4ba-5883-a19f-da26bd64d731'})  2025-06-19 10:21:35.350761 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:21:35.350772 | orchestrator | 2025-06-19 10:21:35.350782 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-06-19 10:21:35.350793 | orchestrator | Thursday 19 June 2025 10:21:35 +0000 (0:00:00.157) 0:00:47.208 ********* 2025-06-19 10:21:35.350804 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6ed986be-d550-5e98-86ee-1d899c3b1ca9', 'data_vg': 'ceph-6ed986be-d550-5e98-86ee-1d899c3b1ca9'})  2025-06-19 10:21:35.350815 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-79abc216-b4ba-5883-a19f-da26bd64d731', 'data_vg': 'ceph-79abc216-b4ba-5883-a19f-da26bd64d731'})  2025-06-19 10:21:35.350832 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:21:41.357201 | orchestrator | 2025-06-19 10:21:41.357299 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-06-19 10:21:41.357316 | orchestrator | Thursday 19 June 2025 10:21:35 +0000 (0:00:00.174) 0:00:47.383 ********* 2025-06-19 10:21:41.357330 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-6ed986be-d550-5e98-86ee-1d899c3b1ca9', 'data_vg': 'ceph-6ed986be-d550-5e98-86ee-1d899c3b1ca9'})  2025-06-19 10:21:41.357344 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-79abc216-b4ba-5883-a19f-da26bd64d731', 'data_vg': 'ceph-79abc216-b4ba-5883-a19f-da26bd64d731'})  2025-06-19 10:21:41.357383 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:21:41.357396 | orchestrator | 2025-06-19 10:21:41.357407 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-06-19 10:21:41.357423 | orchestrator | Thursday 19 June 2025 10:21:35 +0000 (0:00:00.143) 0:00:47.527 ********* 2025-06-19 10:21:41.357434 | orchestrator | ok: [testbed-node-4] => { 2025-06-19 10:21:41.357446 | orchestrator |  "lvm_report": { 2025-06-19 10:21:41.357458 | orchestrator |  "lv": [ 2025-06-19 10:21:41.357469 | orchestrator |  { 2025-06-19 10:21:41.357480 | orchestrator |  "lv_name": "osd-block-6ed986be-d550-5e98-86ee-1d899c3b1ca9", 2025-06-19 10:21:41.357491 | orchestrator |  "vg_name": "ceph-6ed986be-d550-5e98-86ee-1d899c3b1ca9" 2025-06-19 10:21:41.357502 | orchestrator |  }, 2025-06-19 10:21:41.357513 | orchestrator |  { 2025-06-19 10:21:41.357524 | orchestrator |  "lv_name": "osd-block-79abc216-b4ba-5883-a19f-da26bd64d731", 2025-06-19 10:21:41.357535 | orchestrator |  "vg_name": "ceph-79abc216-b4ba-5883-a19f-da26bd64d731" 2025-06-19 10:21:41.357546 | orchestrator |  } 2025-06-19 10:21:41.357557 | orchestrator |  ], 2025-06-19 10:21:41.357567 | orchestrator |  "pv": [ 2025-06-19 10:21:41.357578 | orchestrator |  { 2025-06-19 10:21:41.357589 | orchestrator |  "pv_name": "/dev/sdb", 2025-06-19 10:21:41.357600 | orchestrator |  "vg_name": "ceph-6ed986be-d550-5e98-86ee-1d899c3b1ca9" 2025-06-19 10:21:41.357611 | orchestrator |  }, 2025-06-19 10:21:41.357621 | orchestrator |  { 2025-06-19 10:21:41.357632 | orchestrator |  "pv_name": "/dev/sdc", 2025-06-19 10:21:41.357684 | orchestrator |  "vg_name": 
"ceph-79abc216-b4ba-5883-a19f-da26bd64d731" 2025-06-19 10:21:41.357699 | orchestrator |  } 2025-06-19 10:21:41.357710 | orchestrator |  ] 2025-06-19 10:21:41.357721 | orchestrator |  } 2025-06-19 10:21:41.357732 | orchestrator | } 2025-06-19 10:21:41.357745 | orchestrator | 2025-06-19 10:21:41.357757 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-06-19 10:21:41.357769 | orchestrator | 2025-06-19 10:21:41.357781 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-19 10:21:41.357794 | orchestrator | Thursday 19 June 2025 10:21:35 +0000 (0:00:00.506) 0:00:48.033 ********* 2025-06-19 10:21:41.357806 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-06-19 10:21:41.357818 | orchestrator | 2025-06-19 10:21:41.357831 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-19 10:21:41.357843 | orchestrator | Thursday 19 June 2025 10:21:36 +0000 (0:00:00.267) 0:00:48.300 ********* 2025-06-19 10:21:41.357856 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:21:41.357868 | orchestrator | 2025-06-19 10:21:41.357880 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-19 10:21:41.357892 | orchestrator | Thursday 19 June 2025 10:21:36 +0000 (0:00:00.239) 0:00:48.540 ********* 2025-06-19 10:21:41.357905 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-06-19 10:21:41.357917 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-06-19 10:21:41.357929 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-06-19 10:21:41.357941 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-06-19 10:21:41.357953 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-06-19 10:21:41.357966 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-06-19 10:21:41.357977 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-06-19 10:21:41.357988 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-06-19 10:21:41.358008 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-06-19 10:21:41.358072 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-06-19 10:21:41.358084 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-06-19 10:21:41.358095 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-06-19 10:21:41.358106 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-06-19 10:21:41.358117 | orchestrator | 2025-06-19 10:21:41.358128 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-19 10:21:41.358139 | orchestrator | Thursday 19 June 2025 10:21:36 +0000 (0:00:00.410) 0:00:48.950 ********* 2025-06-19 10:21:41.358150 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:41.358161 | orchestrator | 2025-06-19 10:21:41.358172 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-19 10:21:41.358183 | orchestrator | Thursday 19 June 2025 10:21:37 +0000 (0:00:00.194) 0:00:49.144 ********* 2025-06-19 10:21:41.358194 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:41.358205 | orchestrator | 2025-06-19 10:21:41.358261 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-19 10:21:41.358293 | orchestrator | 
Thursday 19 June 2025 10:21:37 +0000 (0:00:00.207) 0:00:49.352 ********* 2025-06-19 10:21:41.358305 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:41.358316 | orchestrator | 2025-06-19 10:21:41.358327 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-19 10:21:41.358338 | orchestrator | Thursday 19 June 2025 10:21:37 +0000 (0:00:00.207) 0:00:49.559 ********* 2025-06-19 10:21:41.358349 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:41.358360 | orchestrator | 2025-06-19 10:21:41.358371 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-19 10:21:41.358382 | orchestrator | Thursday 19 June 2025 10:21:37 +0000 (0:00:00.186) 0:00:49.745 ********* 2025-06-19 10:21:41.358392 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:41.358403 | orchestrator | 2025-06-19 10:21:41.358414 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-19 10:21:41.358425 | orchestrator | Thursday 19 June 2025 10:21:37 +0000 (0:00:00.188) 0:00:49.934 ********* 2025-06-19 10:21:41.358436 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:41.358447 | orchestrator | 2025-06-19 10:21:41.358458 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-19 10:21:41.358469 | orchestrator | Thursday 19 June 2025 10:21:38 +0000 (0:00:00.629) 0:00:50.564 ********* 2025-06-19 10:21:41.358480 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:41.358490 | orchestrator | 2025-06-19 10:21:41.358501 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-19 10:21:41.358512 | orchestrator | Thursday 19 June 2025 10:21:38 +0000 (0:00:00.191) 0:00:50.756 ********* 2025-06-19 10:21:41.358523 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:41.358534 | orchestrator | 2025-06-19 10:21:41.358545 
| orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-19 10:21:41.358556 | orchestrator | Thursday 19 June 2025 10:21:38 +0000 (0:00:00.200) 0:00:50.957 ********* 2025-06-19 10:21:41.358566 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d3db73c4-91fc-4185-92a8-f3f49747b38e) 2025-06-19 10:21:41.358579 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d3db73c4-91fc-4185-92a8-f3f49747b38e) 2025-06-19 10:21:41.358589 | orchestrator | 2025-06-19 10:21:41.358600 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-19 10:21:41.358611 | orchestrator | Thursday 19 June 2025 10:21:39 +0000 (0:00:00.444) 0:00:51.401 ********* 2025-06-19 10:21:41.358622 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1ab95973-8f65-40ad-b4e2-5ebf4e7cdc3f) 2025-06-19 10:21:41.358641 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1ab95973-8f65-40ad-b4e2-5ebf4e7cdc3f) 2025-06-19 10:21:41.358680 | orchestrator | 2025-06-19 10:21:41.358691 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-19 10:21:41.358702 | orchestrator | Thursday 19 June 2025 10:21:39 +0000 (0:00:00.418) 0:00:51.819 ********* 2025-06-19 10:21:41.358713 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d7da1435-c5c9-4327-bd6f-1fcfb647c27d) 2025-06-19 10:21:41.358724 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d7da1435-c5c9-4327-bd6f-1fcfb647c27d) 2025-06-19 10:21:41.358734 | orchestrator | 2025-06-19 10:21:41.358745 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-19 10:21:41.358756 | orchestrator | Thursday 19 June 2025 10:21:40 +0000 (0:00:00.411) 0:00:52.231 ********* 2025-06-19 10:21:41.358767 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_48d47195-a07b-47d0-b7e6-8f07488663d6) 2025-06-19 10:21:41.358777 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_48d47195-a07b-47d0-b7e6-8f07488663d6) 2025-06-19 10:21:41.358788 | orchestrator | 2025-06-19 10:21:41.358798 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-19 10:21:41.358809 | orchestrator | Thursday 19 June 2025 10:21:40 +0000 (0:00:00.426) 0:00:52.658 ********* 2025-06-19 10:21:41.358820 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-19 10:21:41.358831 | orchestrator | 2025-06-19 10:21:41.358841 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-19 10:21:41.358852 | orchestrator | Thursday 19 June 2025 10:21:40 +0000 (0:00:00.316) 0:00:52.975 ********* 2025-06-19 10:21:41.358863 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-06-19 10:21:41.358873 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-06-19 10:21:41.358884 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-06-19 10:21:41.358900 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-06-19 10:21:41.358910 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-06-19 10:21:41.358921 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-06-19 10:21:41.358932 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-06-19 10:21:41.358942 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-06-19 10:21:41.358953 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-06-19 10:21:41.358964 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-06-19 10:21:41.358975 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-06-19 10:21:41.358992 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-06-19 10:21:50.291670 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-06-19 10:21:50.291782 | orchestrator | 2025-06-19 10:21:50.291799 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-19 10:21:50.291812 | orchestrator | Thursday 19 June 2025 10:21:41 +0000 (0:00:00.409) 0:00:53.385 ********* 2025-06-19 10:21:50.291823 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:50.291835 | orchestrator | 2025-06-19 10:21:50.291846 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-19 10:21:50.291857 | orchestrator | Thursday 19 June 2025 10:21:41 +0000 (0:00:00.199) 0:00:53.584 ********* 2025-06-19 10:21:50.291868 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:50.291879 | orchestrator | 2025-06-19 10:21:50.291889 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-19 10:21:50.291938 | orchestrator | Thursday 19 June 2025 10:21:41 +0000 (0:00:00.191) 0:00:53.776 ********* 2025-06-19 10:21:50.291959 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:50.291977 | orchestrator | 2025-06-19 10:21:50.291994 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-19 10:21:50.292012 | orchestrator | Thursday 19 June 2025 10:21:42 +0000 (0:00:00.583) 0:00:54.360 ********* 2025-06-19 10:21:50.292030 | orchestrator | 
skipping: [testbed-node-5] 2025-06-19 10:21:50.292047 | orchestrator | 2025-06-19 10:21:50.292066 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-19 10:21:50.292083 | orchestrator | Thursday 19 June 2025 10:21:42 +0000 (0:00:00.206) 0:00:54.566 ********* 2025-06-19 10:21:50.292102 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:50.292120 | orchestrator | 2025-06-19 10:21:50.292141 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-19 10:21:50.292160 | orchestrator | Thursday 19 June 2025 10:21:42 +0000 (0:00:00.199) 0:00:54.766 ********* 2025-06-19 10:21:50.292179 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:50.292198 | orchestrator | 2025-06-19 10:21:50.292218 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-19 10:21:50.292233 | orchestrator | Thursday 19 June 2025 10:21:42 +0000 (0:00:00.221) 0:00:54.987 ********* 2025-06-19 10:21:50.292245 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:50.292256 | orchestrator | 2025-06-19 10:21:50.292266 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-19 10:21:50.292277 | orchestrator | Thursday 19 June 2025 10:21:43 +0000 (0:00:00.198) 0:00:55.186 ********* 2025-06-19 10:21:50.292288 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:50.292298 | orchestrator | 2025-06-19 10:21:50.292309 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-19 10:21:50.292320 | orchestrator | Thursday 19 June 2025 10:21:43 +0000 (0:00:00.199) 0:00:55.385 ********* 2025-06-19 10:21:50.292330 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-06-19 10:21:50.292342 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-06-19 10:21:50.292354 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-06-19 
10:21:50.292373 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-06-19 10:21:50.292391 | orchestrator | 2025-06-19 10:21:50.292410 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-19 10:21:50.292429 | orchestrator | Thursday 19 June 2025 10:21:43 +0000 (0:00:00.632) 0:00:56.018 ********* 2025-06-19 10:21:50.292447 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:50.292465 | orchestrator | 2025-06-19 10:21:50.292484 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-19 10:21:50.292503 | orchestrator | Thursday 19 June 2025 10:21:44 +0000 (0:00:00.194) 0:00:56.213 ********* 2025-06-19 10:21:50.292521 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:50.292541 | orchestrator | 2025-06-19 10:21:50.292584 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-19 10:21:50.292603 | orchestrator | Thursday 19 June 2025 10:21:44 +0000 (0:00:00.195) 0:00:56.408 ********* 2025-06-19 10:21:50.292622 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:50.292640 | orchestrator | 2025-06-19 10:21:50.292658 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-19 10:21:50.292676 | orchestrator | Thursday 19 June 2025 10:21:44 +0000 (0:00:00.198) 0:00:56.607 ********* 2025-06-19 10:21:50.292695 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:50.292714 | orchestrator | 2025-06-19 10:21:50.292727 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-06-19 10:21:50.292738 | orchestrator | Thursday 19 June 2025 10:21:44 +0000 (0:00:00.204) 0:00:56.811 ********* 2025-06-19 10:21:50.292748 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:50.292759 | orchestrator | 2025-06-19 10:21:50.292769 | orchestrator | TASK [Create dict of block VGs -> PVs from 
ceph_osd_devices] ******************* 2025-06-19 10:21:50.292794 | orchestrator | Thursday 19 June 2025 10:21:44 +0000 (0:00:00.147) 0:00:56.959 ********* 2025-06-19 10:21:50.292830 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3c3fffd7-e076-56d5-815a-37625d7b3693'}}) 2025-06-19 10:21:50.292852 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'eebf63d4-54bc-5b4a-b141-3683d252bf06'}}) 2025-06-19 10:21:50.292871 | orchestrator | 2025-06-19 10:21:50.292889 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-06-19 10:21:50.292909 | orchestrator | Thursday 19 June 2025 10:21:45 +0000 (0:00:00.392) 0:00:57.351 ********* 2025-06-19 10:21:50.292928 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-3c3fffd7-e076-56d5-815a-37625d7b3693', 'data_vg': 'ceph-3c3fffd7-e076-56d5-815a-37625d7b3693'}) 2025-06-19 10:21:50.292950 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-eebf63d4-54bc-5b4a-b141-3683d252bf06', 'data_vg': 'ceph-eebf63d4-54bc-5b4a-b141-3683d252bf06'}) 2025-06-19 10:21:50.292969 | orchestrator | 2025-06-19 10:21:50.292987 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-06-19 10:21:50.293033 | orchestrator | Thursday 19 June 2025 10:21:47 +0000 (0:00:01.913) 0:00:59.265 ********* 2025-06-19 10:21:50.293055 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3c3fffd7-e076-56d5-815a-37625d7b3693', 'data_vg': 'ceph-3c3fffd7-e076-56d5-815a-37625d7b3693'})  2025-06-19 10:21:50.293075 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eebf63d4-54bc-5b4a-b141-3683d252bf06', 'data_vg': 'ceph-eebf63d4-54bc-5b4a-b141-3683d252bf06'})  2025-06-19 10:21:50.293093 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:50.293111 | orchestrator | 2025-06-19 10:21:50.293129 | orchestrator | TASK [Create 
block LVs] ******************************************************** 2025-06-19 10:21:50.293148 | orchestrator | Thursday 19 June 2025 10:21:47 +0000 (0:00:00.160) 0:00:59.426 ********* 2025-06-19 10:21:50.293167 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-3c3fffd7-e076-56d5-815a-37625d7b3693', 'data_vg': 'ceph-3c3fffd7-e076-56d5-815a-37625d7b3693'}) 2025-06-19 10:21:50.293178 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-eebf63d4-54bc-5b4a-b141-3683d252bf06', 'data_vg': 'ceph-eebf63d4-54bc-5b4a-b141-3683d252bf06'}) 2025-06-19 10:21:50.293189 | orchestrator | 2025-06-19 10:21:50.293199 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-06-19 10:21:50.293210 | orchestrator | Thursday 19 June 2025 10:21:48 +0000 (0:00:01.337) 0:01:00.763 ********* 2025-06-19 10:21:50.293221 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3c3fffd7-e076-56d5-815a-37625d7b3693', 'data_vg': 'ceph-3c3fffd7-e076-56d5-815a-37625d7b3693'})  2025-06-19 10:21:50.293232 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eebf63d4-54bc-5b4a-b141-3683d252bf06', 'data_vg': 'ceph-eebf63d4-54bc-5b4a-b141-3683d252bf06'})  2025-06-19 10:21:50.293242 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:50.293253 | orchestrator | 2025-06-19 10:21:50.293263 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-06-19 10:21:50.293274 | orchestrator | Thursday 19 June 2025 10:21:48 +0000 (0:00:00.150) 0:01:00.914 ********* 2025-06-19 10:21:50.293285 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:50.293295 | orchestrator | 2025-06-19 10:21:50.293306 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-06-19 10:21:50.293317 | orchestrator | Thursday 19 June 2025 10:21:49 +0000 (0:00:00.146) 0:01:01.060 ********* 2025-06-19 10:21:50.293327 | 
orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3c3fffd7-e076-56d5-815a-37625d7b3693', 'data_vg': 'ceph-3c3fffd7-e076-56d5-815a-37625d7b3693'})  2025-06-19 10:21:50.293338 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eebf63d4-54bc-5b4a-b141-3683d252bf06', 'data_vg': 'ceph-eebf63d4-54bc-5b4a-b141-3683d252bf06'})  2025-06-19 10:21:50.293349 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:50.293369 | orchestrator | 2025-06-19 10:21:50.293380 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-06-19 10:21:50.293391 | orchestrator | Thursday 19 June 2025 10:21:49 +0000 (0:00:00.152) 0:01:01.213 ********* 2025-06-19 10:21:50.293401 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:50.293412 | orchestrator | 2025-06-19 10:21:50.293422 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-06-19 10:21:50.293433 | orchestrator | Thursday 19 June 2025 10:21:49 +0000 (0:00:00.158) 0:01:01.372 ********* 2025-06-19 10:21:50.293444 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3c3fffd7-e076-56d5-815a-37625d7b3693', 'data_vg': 'ceph-3c3fffd7-e076-56d5-815a-37625d7b3693'})  2025-06-19 10:21:50.293454 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eebf63d4-54bc-5b4a-b141-3683d252bf06', 'data_vg': 'ceph-eebf63d4-54bc-5b4a-b141-3683d252bf06'})  2025-06-19 10:21:50.293465 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:50.293476 | orchestrator | 2025-06-19 10:21:50.293486 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-06-19 10:21:50.293497 | orchestrator | Thursday 19 June 2025 10:21:49 +0000 (0:00:00.155) 0:01:01.527 ********* 2025-06-19 10:21:50.293508 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:50.293518 | orchestrator | 2025-06-19 10:21:50.293529 | orchestrator | TASK 
[Print 'Create DB+WAL VGs'] *********************************************** 2025-06-19 10:21:50.293546 | orchestrator | Thursday 19 June 2025 10:21:49 +0000 (0:00:00.144) 0:01:01.671 ********* 2025-06-19 10:21:50.293585 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3c3fffd7-e076-56d5-815a-37625d7b3693', 'data_vg': 'ceph-3c3fffd7-e076-56d5-815a-37625d7b3693'})  2025-06-19 10:21:50.293600 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eebf63d4-54bc-5b4a-b141-3683d252bf06', 'data_vg': 'ceph-eebf63d4-54bc-5b4a-b141-3683d252bf06'})  2025-06-19 10:21:50.293611 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:50.293621 | orchestrator | 2025-06-19 10:21:50.293632 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-06-19 10:21:50.293642 | orchestrator | Thursday 19 June 2025 10:21:49 +0000 (0:00:00.154) 0:01:01.826 ********* 2025-06-19 10:21:50.293653 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:21:50.293664 | orchestrator | 2025-06-19 10:21:50.293675 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-06-19 10:21:50.293685 | orchestrator | Thursday 19 June 2025 10:21:49 +0000 (0:00:00.143) 0:01:01.969 ********* 2025-06-19 10:21:50.293705 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3c3fffd7-e076-56d5-815a-37625d7b3693', 'data_vg': 'ceph-3c3fffd7-e076-56d5-815a-37625d7b3693'})  2025-06-19 10:21:56.367662 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eebf63d4-54bc-5b4a-b141-3683d252bf06', 'data_vg': 'ceph-eebf63d4-54bc-5b4a-b141-3683d252bf06'})  2025-06-19 10:21:56.367773 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:56.367790 | orchestrator | 2025-06-19 10:21:56.367802 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-06-19 10:21:56.367815 | orchestrator | Thursday 19 June 2025 
10:21:50 +0000 (0:00:00.358) 0:01:02.327 ********* 2025-06-19 10:21:56.367826 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3c3fffd7-e076-56d5-815a-37625d7b3693', 'data_vg': 'ceph-3c3fffd7-e076-56d5-815a-37625d7b3693'})  2025-06-19 10:21:56.367837 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eebf63d4-54bc-5b4a-b141-3683d252bf06', 'data_vg': 'ceph-eebf63d4-54bc-5b4a-b141-3683d252bf06'})  2025-06-19 10:21:56.367848 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:56.367859 | orchestrator | 2025-06-19 10:21:56.367870 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-06-19 10:21:56.367880 | orchestrator | Thursday 19 June 2025 10:21:50 +0000 (0:00:00.159) 0:01:02.487 ********* 2025-06-19 10:21:56.367891 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3c3fffd7-e076-56d5-815a-37625d7b3693', 'data_vg': 'ceph-3c3fffd7-e076-56d5-815a-37625d7b3693'})  2025-06-19 10:21:56.367923 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eebf63d4-54bc-5b4a-b141-3683d252bf06', 'data_vg': 'ceph-eebf63d4-54bc-5b4a-b141-3683d252bf06'})  2025-06-19 10:21:56.367934 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:56.367945 | orchestrator | 2025-06-19 10:21:56.367956 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-06-19 10:21:56.367966 | orchestrator | Thursday 19 June 2025 10:21:50 +0000 (0:00:00.149) 0:01:02.637 ********* 2025-06-19 10:21:56.367977 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:56.367987 | orchestrator | 2025-06-19 10:21:56.367998 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-06-19 10:21:56.368009 | orchestrator | Thursday 19 June 2025 10:21:50 +0000 (0:00:00.151) 0:01:02.789 ********* 2025-06-19 10:21:56.368019 | orchestrator | skipping: [testbed-node-5] 2025-06-19 
10:21:56.368030 | orchestrator | 2025-06-19 10:21:56.368040 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-06-19 10:21:56.368051 | orchestrator | Thursday 19 June 2025 10:21:50 +0000 (0:00:00.140) 0:01:02.930 ********* 2025-06-19 10:21:56.368061 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:56.368072 | orchestrator | 2025-06-19 10:21:56.368083 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-06-19 10:21:56.368093 | orchestrator | Thursday 19 June 2025 10:21:51 +0000 (0:00:00.141) 0:01:03.071 ********* 2025-06-19 10:21:56.368104 | orchestrator | ok: [testbed-node-5] => { 2025-06-19 10:21:56.368115 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-06-19 10:21:56.368126 | orchestrator | } 2025-06-19 10:21:56.368137 | orchestrator | 2025-06-19 10:21:56.368149 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-06-19 10:21:56.368161 | orchestrator | Thursday 19 June 2025 10:21:51 +0000 (0:00:00.145) 0:01:03.217 ********* 2025-06-19 10:21:56.368172 | orchestrator | ok: [testbed-node-5] => { 2025-06-19 10:21:56.368185 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-06-19 10:21:56.368201 | orchestrator | } 2025-06-19 10:21:56.368221 | orchestrator | 2025-06-19 10:21:56.368240 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-06-19 10:21:56.368261 | orchestrator | Thursday 19 June 2025 10:21:51 +0000 (0:00:00.171) 0:01:03.389 ********* 2025-06-19 10:21:56.368281 | orchestrator | ok: [testbed-node-5] => { 2025-06-19 10:21:56.368300 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-06-19 10:21:56.368313 | orchestrator | } 2025-06-19 10:21:56.368324 | orchestrator | 2025-06-19 10:21:56.368334 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-06-19 10:21:56.368345 | 
orchestrator | Thursday 19 June 2025 10:21:51 +0000 (0:00:00.153) 0:01:03.542 ********* 2025-06-19 10:21:56.368356 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:21:56.368366 | orchestrator | 2025-06-19 10:21:56.368377 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-06-19 10:21:56.368387 | orchestrator | Thursday 19 June 2025 10:21:51 +0000 (0:00:00.494) 0:01:04.037 ********* 2025-06-19 10:21:56.368398 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:21:56.368409 | orchestrator | 2025-06-19 10:21:56.368419 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-06-19 10:21:56.368430 | orchestrator | Thursday 19 June 2025 10:21:52 +0000 (0:00:00.513) 0:01:04.550 ********* 2025-06-19 10:21:56.368440 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:21:56.368451 | orchestrator | 2025-06-19 10:21:56.368461 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-06-19 10:21:56.368472 | orchestrator | Thursday 19 June 2025 10:21:53 +0000 (0:00:00.498) 0:01:05.049 ********* 2025-06-19 10:21:56.368482 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:21:56.368521 | orchestrator | 2025-06-19 10:21:56.368535 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-06-19 10:21:56.368546 | orchestrator | Thursday 19 June 2025 10:21:53 +0000 (0:00:00.357) 0:01:05.406 ********* 2025-06-19 10:21:56.368568 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:56.368579 | orchestrator | 2025-06-19 10:21:56.368589 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-06-19 10:21:56.368615 | orchestrator | Thursday 19 June 2025 10:21:53 +0000 (0:00:00.106) 0:01:05.512 ********* 2025-06-19 10:21:56.368626 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:56.368636 | orchestrator | 2025-06-19 10:21:56.368647 | 
orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-06-19 10:21:56.368658 | orchestrator | Thursday 19 June 2025 10:21:53 +0000 (0:00:00.119) 0:01:05.632 ********* 2025-06-19 10:21:56.368669 | orchestrator | ok: [testbed-node-5] => { 2025-06-19 10:21:56.368680 | orchestrator |  "vgs_report": { 2025-06-19 10:21:56.368690 | orchestrator |  "vg": [] 2025-06-19 10:21:56.368719 | orchestrator |  } 2025-06-19 10:21:56.368730 | orchestrator | } 2025-06-19 10:21:56.368741 | orchestrator | 2025-06-19 10:21:56.368751 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-06-19 10:21:56.368762 | orchestrator | Thursday 19 June 2025 10:21:53 +0000 (0:00:00.139) 0:01:05.772 ********* 2025-06-19 10:21:56.368773 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:56.368783 | orchestrator | 2025-06-19 10:21:56.368794 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-06-19 10:21:56.368804 | orchestrator | Thursday 19 June 2025 10:21:53 +0000 (0:00:00.120) 0:01:05.892 ********* 2025-06-19 10:21:56.368815 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:56.368825 | orchestrator | 2025-06-19 10:21:56.368836 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-06-19 10:21:56.368846 | orchestrator | Thursday 19 June 2025 10:21:53 +0000 (0:00:00.139) 0:01:06.032 ********* 2025-06-19 10:21:56.368857 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:56.368867 | orchestrator | 2025-06-19 10:21:56.368878 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-06-19 10:21:56.368888 | orchestrator | Thursday 19 June 2025 10:21:54 +0000 (0:00:00.142) 0:01:06.174 ********* 2025-06-19 10:21:56.368899 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:56.368909 | orchestrator | 2025-06-19 10:21:56.368920 | 
orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-06-19 10:21:56.368930 | orchestrator | Thursday 19 June 2025 10:21:54 +0000 (0:00:00.154) 0:01:06.329 ********* 2025-06-19 10:21:56.368941 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:56.368951 | orchestrator | 2025-06-19 10:21:56.368962 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-06-19 10:21:56.368972 | orchestrator | Thursday 19 June 2025 10:21:54 +0000 (0:00:00.131) 0:01:06.460 ********* 2025-06-19 10:21:56.368983 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:56.368993 | orchestrator | 2025-06-19 10:21:56.369004 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-06-19 10:21:56.369014 | orchestrator | Thursday 19 June 2025 10:21:54 +0000 (0:00:00.144) 0:01:06.604 ********* 2025-06-19 10:21:56.369025 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:56.369035 | orchestrator | 2025-06-19 10:21:56.369046 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-06-19 10:21:56.369057 | orchestrator | Thursday 19 June 2025 10:21:54 +0000 (0:00:00.143) 0:01:06.747 ********* 2025-06-19 10:21:56.369067 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:56.369077 | orchestrator | 2025-06-19 10:21:56.369088 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-06-19 10:21:56.369099 | orchestrator | Thursday 19 June 2025 10:21:54 +0000 (0:00:00.137) 0:01:06.885 ********* 2025-06-19 10:21:56.369109 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:56.369120 | orchestrator | 2025-06-19 10:21:56.369130 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-06-19 10:21:56.369141 | orchestrator | Thursday 19 June 2025 10:21:55 +0000 (0:00:00.339) 0:01:07.224 ********* 
2025-06-19 10:21:56.369151 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:56.369169 | orchestrator | 2025-06-19 10:21:56.369180 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-06-19 10:21:56.369190 | orchestrator | Thursday 19 June 2025 10:21:55 +0000 (0:00:00.143) 0:01:07.367 ********* 2025-06-19 10:21:56.369201 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:56.369211 | orchestrator | 2025-06-19 10:21:56.369222 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-06-19 10:21:56.369232 | orchestrator | Thursday 19 June 2025 10:21:55 +0000 (0:00:00.147) 0:01:07.515 ********* 2025-06-19 10:21:56.369243 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:56.369253 | orchestrator | 2025-06-19 10:21:56.369264 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-06-19 10:21:56.369275 | orchestrator | Thursday 19 June 2025 10:21:55 +0000 (0:00:00.138) 0:01:07.653 ********* 2025-06-19 10:21:56.369285 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:56.369295 | orchestrator | 2025-06-19 10:21:56.369306 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-06-19 10:21:56.369317 | orchestrator | Thursday 19 June 2025 10:21:55 +0000 (0:00:00.145) 0:01:07.799 ********* 2025-06-19 10:21:56.369327 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:56.369338 | orchestrator | 2025-06-19 10:21:56.369348 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-06-19 10:21:56.369359 | orchestrator | Thursday 19 June 2025 10:21:55 +0000 (0:00:00.139) 0:01:07.939 ********* 2025-06-19 10:21:56.369369 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3c3fffd7-e076-56d5-815a-37625d7b3693', 'data_vg': 'ceph-3c3fffd7-e076-56d5-815a-37625d7b3693'})  2025-06-19 
10:21:56.369385 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eebf63d4-54bc-5b4a-b141-3683d252bf06', 'data_vg': 'ceph-eebf63d4-54bc-5b4a-b141-3683d252bf06'})  2025-06-19 10:21:56.369396 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:56.369406 | orchestrator | 2025-06-19 10:21:56.369417 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-06-19 10:21:56.369427 | orchestrator | Thursday 19 June 2025 10:21:56 +0000 (0:00:00.149) 0:01:08.088 ********* 2025-06-19 10:21:56.369438 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3c3fffd7-e076-56d5-815a-37625d7b3693', 'data_vg': 'ceph-3c3fffd7-e076-56d5-815a-37625d7b3693'})  2025-06-19 10:21:56.369449 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eebf63d4-54bc-5b4a-b141-3683d252bf06', 'data_vg': 'ceph-eebf63d4-54bc-5b4a-b141-3683d252bf06'})  2025-06-19 10:21:56.369459 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:56.369469 | orchestrator | 2025-06-19 10:21:56.369480 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-06-19 10:21:56.369511 | orchestrator | Thursday 19 June 2025 10:21:56 +0000 (0:00:00.151) 0:01:08.240 ********* 2025-06-19 10:21:56.369531 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3c3fffd7-e076-56d5-815a-37625d7b3693', 'data_vg': 'ceph-3c3fffd7-e076-56d5-815a-37625d7b3693'})  2025-06-19 10:21:59.318184 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eebf63d4-54bc-5b4a-b141-3683d252bf06', 'data_vg': 'ceph-eebf63d4-54bc-5b4a-b141-3683d252bf06'})  2025-06-19 10:21:59.318279 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:59.318295 | orchestrator | 2025-06-19 10:21:59.318309 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-06-19 10:21:59.318329 | orchestrator | Thursday 19 June 2025 
10:21:56 +0000 (0:00:00.162) 0:01:08.403 ********* 2025-06-19 10:21:59.318350 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3c3fffd7-e076-56d5-815a-37625d7b3693', 'data_vg': 'ceph-3c3fffd7-e076-56d5-815a-37625d7b3693'})  2025-06-19 10:21:59.318371 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eebf63d4-54bc-5b4a-b141-3683d252bf06', 'data_vg': 'ceph-eebf63d4-54bc-5b4a-b141-3683d252bf06'})  2025-06-19 10:21:59.318390 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:59.318438 | orchestrator | 2025-06-19 10:21:59.318460 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-06-19 10:21:59.318507 | orchestrator | Thursday 19 June 2025 10:21:56 +0000 (0:00:00.161) 0:01:08.565 ********* 2025-06-19 10:21:59.318518 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3c3fffd7-e076-56d5-815a-37625d7b3693', 'data_vg': 'ceph-3c3fffd7-e076-56d5-815a-37625d7b3693'})  2025-06-19 10:21:59.318530 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eebf63d4-54bc-5b4a-b141-3683d252bf06', 'data_vg': 'ceph-eebf63d4-54bc-5b4a-b141-3683d252bf06'})  2025-06-19 10:21:59.318540 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:59.318551 | orchestrator | 2025-06-19 10:21:59.318562 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-06-19 10:21:59.318572 | orchestrator | Thursday 19 June 2025 10:21:56 +0000 (0:00:00.155) 0:01:08.720 ********* 2025-06-19 10:21:59.318583 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3c3fffd7-e076-56d5-815a-37625d7b3693', 'data_vg': 'ceph-3c3fffd7-e076-56d5-815a-37625d7b3693'})  2025-06-19 10:21:59.318594 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eebf63d4-54bc-5b4a-b141-3683d252bf06', 'data_vg': 'ceph-eebf63d4-54bc-5b4a-b141-3683d252bf06'})  2025-06-19 10:21:59.318605 | orchestrator | 
skipping: [testbed-node-5] 2025-06-19 10:21:59.318615 | orchestrator | 2025-06-19 10:21:59.318626 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-06-19 10:21:59.318637 | orchestrator | Thursday 19 June 2025 10:21:56 +0000 (0:00:00.148) 0:01:08.869 ********* 2025-06-19 10:21:59.318648 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3c3fffd7-e076-56d5-815a-37625d7b3693', 'data_vg': 'ceph-3c3fffd7-e076-56d5-815a-37625d7b3693'})  2025-06-19 10:21:59.318658 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eebf63d4-54bc-5b4a-b141-3683d252bf06', 'data_vg': 'ceph-eebf63d4-54bc-5b4a-b141-3683d252bf06'})  2025-06-19 10:21:59.318669 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:59.318682 | orchestrator | 2025-06-19 10:21:59.318694 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-06-19 10:21:59.318706 | orchestrator | Thursday 19 June 2025 10:21:57 +0000 (0:00:00.354) 0:01:09.223 ********* 2025-06-19 10:21:59.318718 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3c3fffd7-e076-56d5-815a-37625d7b3693', 'data_vg': 'ceph-3c3fffd7-e076-56d5-815a-37625d7b3693'})  2025-06-19 10:21:59.318731 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eebf63d4-54bc-5b4a-b141-3683d252bf06', 'data_vg': 'ceph-eebf63d4-54bc-5b4a-b141-3683d252bf06'})  2025-06-19 10:21:59.318743 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:59.318755 | orchestrator | 2025-06-19 10:21:59.318767 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-06-19 10:21:59.318779 | orchestrator | Thursday 19 June 2025 10:21:57 +0000 (0:00:00.160) 0:01:09.384 ********* 2025-06-19 10:21:59.318789 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:21:59.318801 | orchestrator | 2025-06-19 10:21:59.318828 | orchestrator | TASK [Get list of Ceph PVs with 
associated VGs] ******************************** 2025-06-19 10:21:59.318840 | orchestrator | Thursday 19 June 2025 10:21:57 +0000 (0:00:00.507) 0:01:09.891 ********* 2025-06-19 10:21:59.318851 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:21:59.318861 | orchestrator | 2025-06-19 10:21:59.318872 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-06-19 10:21:59.318883 | orchestrator | Thursday 19 June 2025 10:21:58 +0000 (0:00:00.512) 0:01:10.404 ********* 2025-06-19 10:21:59.318894 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:21:59.318904 | orchestrator | 2025-06-19 10:21:59.318915 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-06-19 10:21:59.318926 | orchestrator | Thursday 19 June 2025 10:21:58 +0000 (0:00:00.142) 0:01:10.547 ********* 2025-06-19 10:21:59.318937 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-3c3fffd7-e076-56d5-815a-37625d7b3693', 'vg_name': 'ceph-3c3fffd7-e076-56d5-815a-37625d7b3693'}) 2025-06-19 10:21:59.318957 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-eebf63d4-54bc-5b4a-b141-3683d252bf06', 'vg_name': 'ceph-eebf63d4-54bc-5b4a-b141-3683d252bf06'}) 2025-06-19 10:21:59.318968 | orchestrator | 2025-06-19 10:21:59.318979 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-06-19 10:21:59.318989 | orchestrator | Thursday 19 June 2025 10:21:58 +0000 (0:00:00.158) 0:01:10.705 ********* 2025-06-19 10:21:59.319019 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3c3fffd7-e076-56d5-815a-37625d7b3693', 'data_vg': 'ceph-3c3fffd7-e076-56d5-815a-37625d7b3693'})  2025-06-19 10:21:59.319031 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eebf63d4-54bc-5b4a-b141-3683d252bf06', 'data_vg': 'ceph-eebf63d4-54bc-5b4a-b141-3683d252bf06'})  2025-06-19 10:21:59.319041 | orchestrator | skipping: 
[testbed-node-5] 2025-06-19 10:21:59.319052 | orchestrator | 2025-06-19 10:21:59.319063 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-06-19 10:21:59.319074 | orchestrator | Thursday 19 June 2025 10:21:58 +0000 (0:00:00.165) 0:01:10.871 ********* 2025-06-19 10:21:59.319084 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3c3fffd7-e076-56d5-815a-37625d7b3693', 'data_vg': 'ceph-3c3fffd7-e076-56d5-815a-37625d7b3693'})  2025-06-19 10:21:59.319095 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eebf63d4-54bc-5b4a-b141-3683d252bf06', 'data_vg': 'ceph-eebf63d4-54bc-5b4a-b141-3683d252bf06'})  2025-06-19 10:21:59.319106 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:59.319117 | orchestrator | 2025-06-19 10:21:59.319127 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-06-19 10:21:59.319138 | orchestrator | Thursday 19 June 2025 10:21:58 +0000 (0:00:00.164) 0:01:11.035 ********* 2025-06-19 10:21:59.319148 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3c3fffd7-e076-56d5-815a-37625d7b3693', 'data_vg': 'ceph-3c3fffd7-e076-56d5-815a-37625d7b3693'})  2025-06-19 10:21:59.319159 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-eebf63d4-54bc-5b4a-b141-3683d252bf06', 'data_vg': 'ceph-eebf63d4-54bc-5b4a-b141-3683d252bf06'})  2025-06-19 10:21:59.319170 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:21:59.319180 | orchestrator | 2025-06-19 10:21:59.319191 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-06-19 10:21:59.319201 | orchestrator | Thursday 19 June 2025 10:21:59 +0000 (0:00:00.158) 0:01:11.194 ********* 2025-06-19 10:21:59.319212 | orchestrator | ok: [testbed-node-5] => { 2025-06-19 10:21:59.319223 | orchestrator |  "lvm_report": { 2025-06-19 10:21:59.319233 | orchestrator |  "lv": [ 2025-06-19 
10:21:59.319244 | orchestrator |  { 2025-06-19 10:21:59.319254 | orchestrator |  "lv_name": "osd-block-3c3fffd7-e076-56d5-815a-37625d7b3693", 2025-06-19 10:21:59.319266 | orchestrator |  "vg_name": "ceph-3c3fffd7-e076-56d5-815a-37625d7b3693" 2025-06-19 10:21:59.319276 | orchestrator |  }, 2025-06-19 10:21:59.319287 | orchestrator |  { 2025-06-19 10:21:59.319297 | orchestrator |  "lv_name": "osd-block-eebf63d4-54bc-5b4a-b141-3683d252bf06", 2025-06-19 10:21:59.319308 | orchestrator |  "vg_name": "ceph-eebf63d4-54bc-5b4a-b141-3683d252bf06" 2025-06-19 10:21:59.319319 | orchestrator |  } 2025-06-19 10:21:59.319329 | orchestrator |  ], 2025-06-19 10:21:59.319340 | orchestrator |  "pv": [ 2025-06-19 10:21:59.319350 | orchestrator |  { 2025-06-19 10:21:59.319360 | orchestrator |  "pv_name": "/dev/sdb", 2025-06-19 10:21:59.319371 | orchestrator |  "vg_name": "ceph-3c3fffd7-e076-56d5-815a-37625d7b3693" 2025-06-19 10:21:59.319382 | orchestrator |  }, 2025-06-19 10:21:59.319392 | orchestrator |  { 2025-06-19 10:21:59.319402 | orchestrator |  "pv_name": "/dev/sdc", 2025-06-19 10:21:59.319413 | orchestrator |  "vg_name": "ceph-eebf63d4-54bc-5b4a-b141-3683d252bf06" 2025-06-19 10:21:59.319431 | orchestrator |  } 2025-06-19 10:21:59.319441 | orchestrator |  ] 2025-06-19 10:21:59.319452 | orchestrator |  } 2025-06-19 10:21:59.319486 | orchestrator | } 2025-06-19 10:21:59.319508 | orchestrator | 2025-06-19 10:21:59.319530 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-19 10:21:59.319551 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-06-19 10:21:59.319572 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-06-19 10:21:59.319590 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-06-19 10:21:59.319601 | orchestrator | 2025-06-19 10:21:59.319612 | 
orchestrator | 2025-06-19 10:21:59.319623 | orchestrator | 2025-06-19 10:21:59.319634 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-19 10:21:59.319644 | orchestrator | Thursday 19 June 2025 10:21:59 +0000 (0:00:00.134) 0:01:11.328 ********* 2025-06-19 10:21:59.319655 | orchestrator | =============================================================================== 2025-06-19 10:21:59.319666 | orchestrator | Create block VGs -------------------------------------------------------- 5.64s 2025-06-19 10:21:59.319676 | orchestrator | Create block LVs -------------------------------------------------------- 4.09s 2025-06-19 10:21:59.319687 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.88s 2025-06-19 10:21:59.319697 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.56s 2025-06-19 10:21:59.319708 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.56s 2025-06-19 10:21:59.319718 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.55s 2025-06-19 10:21:59.319729 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.53s 2025-06-19 10:21:59.319740 | orchestrator | Add known partitions to the list of available block devices ------------- 1.39s 2025-06-19 10:21:59.319757 | orchestrator | Add known links to the list of available block devices ------------------ 1.17s 2025-06-19 10:21:59.651573 | orchestrator | Add known partitions to the list of available block devices ------------- 1.04s 2025-06-19 10:21:59.651669 | orchestrator | Print LVM report data --------------------------------------------------- 0.92s 2025-06-19 10:21:59.651683 | orchestrator | Add known partitions to the list of available block devices ------------- 0.81s 2025-06-19 10:21:59.651694 | orchestrator | Create dict of block VGs -> PVs from 
ceph_osd_devices ------------------- 0.77s 2025-06-19 10:21:59.651705 | orchestrator | Add known links to the list of available block devices ------------------ 0.76s 2025-06-19 10:21:59.651716 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.74s 2025-06-19 10:21:59.651726 | orchestrator | Get initial list of available block devices ----------------------------- 0.69s 2025-06-19 10:21:59.651737 | orchestrator | Print 'Create WAL LVs for ceph_wal_devices' ----------------------------- 0.69s 2025-06-19 10:21:59.651747 | orchestrator | Fail if DB LV defined in lvm_volumes is missing ------------------------- 0.68s 2025-06-19 10:21:59.651758 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.67s 2025-06-19 10:21:59.651768 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.67s 2025-06-19 10:22:01.491609 | orchestrator | Registering Redlock._acquired_script 2025-06-19 10:22:01.491734 | orchestrator | Registering Redlock._extend_script 2025-06-19 10:22:01.491760 | orchestrator | Registering Redlock._release_script 2025-06-19 10:22:01.551948 | orchestrator | 2025-06-19 10:22:01 | INFO  | Task a1ce2690-46f4-475c-91a9-1c579eb5c581 (facts) was prepared for execution. 2025-06-19 10:22:01.552036 | orchestrator | 2025-06-19 10:22:01 | INFO  | It takes a moment until task a1ce2690-46f4-475c-91a9-1c579eb5c581 (facts) has been started and output is visible here. 
2025-06-19 10:22:14.106187 | orchestrator |
2025-06-19 10:22:14.106308 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-06-19 10:22:14.106368 | orchestrator |
2025-06-19 10:22:14.106381 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-06-19 10:22:14.106393 | orchestrator | Thursday 19 June 2025 10:22:05 +0000 (0:00:00.249) 0:00:00.249 *********
2025-06-19 10:22:14.106404 | orchestrator | ok: [testbed-manager]
2025-06-19 10:22:14.106416 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:22:14.106427 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:22:14.106438 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:22:14.106449 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:22:14.106459 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:22:14.106471 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:22:14.106482 | orchestrator |
2025-06-19 10:22:14.106493 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-06-19 10:22:14.106503 | orchestrator | Thursday 19 June 2025 10:22:06 +0000 (0:00:01.001) 0:00:01.250 *********
2025-06-19 10:22:14.106514 | orchestrator | skipping: [testbed-manager]
2025-06-19 10:22:14.106525 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:22:14.106536 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:22:14.106547 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:22:14.106557 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:22:14.106568 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:22:14.106579 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:22:14.106590 | orchestrator |
2025-06-19 10:22:14.106600 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-19 10:22:14.106611 | orchestrator |
2025-06-19 10:22:14.106622 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-19 10:22:14.106633 | orchestrator | Thursday 19 June 2025 10:22:07 +0000 (0:00:01.191) 0:00:02.442 *********
2025-06-19 10:22:14.106644 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:22:14.106654 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:22:14.106665 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:22:14.106676 | orchestrator | ok: [testbed-manager]
2025-06-19 10:22:14.106686 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:22:14.106697 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:22:14.106708 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:22:14.106718 | orchestrator |
2025-06-19 10:22:14.106731 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-06-19 10:22:14.106743 | orchestrator |
2025-06-19 10:22:14.106755 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-06-19 10:22:14.106768 | orchestrator | Thursday 19 June 2025 10:22:13 +0000 (0:00:05.634) 0:00:08.076 *********
2025-06-19 10:22:14.106779 | orchestrator | skipping: [testbed-manager]
2025-06-19 10:22:14.106791 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:22:14.106804 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:22:14.106816 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:22:14.106828 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:22:14.106840 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:22:14.106852 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:22:14.106864 | orchestrator |
2025-06-19 10:22:14.106876 | orchestrator | PLAY RECAP *********************************************************************
2025-06-19 10:22:14.106889 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-19 10:22:14.106902 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-19 10:22:14.106914 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-19 10:22:14.106926 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-19 10:22:14.106966 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-19 10:22:14.106978 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-19 10:22:14.106990 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-19 10:22:14.107002 | orchestrator |
2025-06-19 10:22:14.107014 | orchestrator |
2025-06-19 10:22:14.107026 | orchestrator | TASKS RECAP ********************************************************************
2025-06-19 10:22:14.107039 | orchestrator | Thursday 19 June 2025 10:22:13 +0000 (0:00:00.521) 0:00:08.598 *********
2025-06-19 10:22:14.107051 | orchestrator | ===============================================================================
2025-06-19 10:22:14.107063 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.63s
2025-06-19 10:22:14.107074 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.19s
2025-06-19 10:22:14.107085 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.00s
2025-06-19 10:22:14.107096 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.52s
2025-06-19 10:22:14.382796 | orchestrator |
2025-06-19 10:22:14.386420 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Thu Jun 19 10:22:14 UTC 2025
2025-06-19 10:22:14.386467 | orchestrator |
2025-06-19 10:22:16.054235 | orchestrator | 2025-06-19 10:22:16 | INFO  | Collection nutshell is prepared for execution
2025-06-19 10:22:16.054388 | orchestrator | 2025-06-19 10:22:16 | INFO  | D [0] - dotfiles
2025-06-19 10:22:16.058684 | orchestrator | Registering Redlock._acquired_script
2025-06-19 10:22:16.058711 | orchestrator | Registering Redlock._extend_script
2025-06-19 10:22:16.058722 | orchestrator | Registering Redlock._release_script
2025-06-19 10:22:16.062887 | orchestrator | 2025-06-19 10:22:16 | INFO  | D [0] - homer
2025-06-19 10:22:16.062913 | orchestrator | 2025-06-19 10:22:16 | INFO  | D [0] - netdata
2025-06-19 10:22:16.062946 | orchestrator | 2025-06-19 10:22:16 | INFO  | D [0] - openstackclient
2025-06-19 10:22:16.062958 | orchestrator | 2025-06-19 10:22:16 | INFO  | D [0] - phpmyadmin
2025-06-19 10:22:16.062969 | orchestrator | 2025-06-19 10:22:16 | INFO  | A [0] - common
2025-06-19 10:22:16.064594 | orchestrator | 2025-06-19 10:22:16 | INFO  | A [1] -- loadbalancer
2025-06-19 10:22:16.064699 | orchestrator | 2025-06-19 10:22:16 | INFO  | D [2] --- opensearch
2025-06-19 10:22:16.064715 | orchestrator | 2025-06-19 10:22:16 | INFO  | A [2] --- mariadb-ng
2025-06-19 10:22:16.064914 | orchestrator | 2025-06-19 10:22:16 | INFO  | D [3] ---- horizon
2025-06-19 10:22:16.065096 | orchestrator | 2025-06-19 10:22:16 | INFO  | A [3] ---- keystone
2025-06-19 10:22:16.065114 | orchestrator | 2025-06-19 10:22:16 | INFO  | A [4] ----- neutron
2025-06-19 10:22:16.065125 | orchestrator | 2025-06-19 10:22:16 | INFO  | D [5] ------ wait-for-nova
2025-06-19 10:22:16.065232 | orchestrator | 2025-06-19 10:22:16 | INFO  | A [5] ------ octavia
2025-06-19 10:22:16.065633 | orchestrator | 2025-06-19 10:22:16 | INFO  | D [4] ----- barbican
2025-06-19 10:22:16.065891 | orchestrator | 2025-06-19 10:22:16 | INFO  | D [4] ----- designate
2025-06-19 10:22:16.065909 | orchestrator | 2025-06-19 10:22:16 | INFO  | D [4] ----- ironic
2025-06-19 10:22:16.065920 | orchestrator | 2025-06-19 10:22:16 | INFO  | D [4] ----- placement
2025-06-19 10:22:16.066123 | orchestrator | 2025-06-19 10:22:16 | INFO  | D [4] ----- magnum
2025-06-19 10:22:16.066648 | orchestrator | 2025-06-19 10:22:16 | INFO  | A [1] -- openvswitch
2025-06-19 10:22:16.066995 | orchestrator | 2025-06-19 10:22:16 | INFO  | D [2] --- ovn
2025-06-19 10:22:16.067206 | orchestrator | 2025-06-19 10:22:16 | INFO  | D [1] -- memcached
2025-06-19 10:22:16.067227 | orchestrator | 2025-06-19 10:22:16 | INFO  | D [1] -- redis
2025-06-19 10:22:16.067239 | orchestrator | 2025-06-19 10:22:16 | INFO  | D [1] -- rabbitmq-ng
2025-06-19 10:22:16.067264 | orchestrator | 2025-06-19 10:22:16 | INFO  | A [0] - kubernetes
2025-06-19 10:22:16.068909 | orchestrator | 2025-06-19 10:22:16 | INFO  | D [1] -- kubeconfig
2025-06-19 10:22:16.068934 | orchestrator | 2025-06-19 10:22:16 | INFO  | A [1] -- copy-kubeconfig
2025-06-19 10:22:16.068947 | orchestrator | 2025-06-19 10:22:16 | INFO  | A [0] - ceph
2025-06-19 10:22:16.070475 | orchestrator | 2025-06-19 10:22:16 | INFO  | A [1] -- ceph-pools
2025-06-19 10:22:16.070498 | orchestrator | 2025-06-19 10:22:16 | INFO  | A [2] --- copy-ceph-keys
2025-06-19 10:22:16.070509 | orchestrator | 2025-06-19 10:22:16 | INFO  | A [3] ---- cephclient
2025-06-19 10:22:16.070683 | orchestrator | 2025-06-19 10:22:16 | INFO  | D [4] ----- ceph-bootstrap-dashboard
2025-06-19 10:22:16.070703 | orchestrator | 2025-06-19 10:22:16 | INFO  | A [4] ----- wait-for-keystone
2025-06-19 10:22:16.070715 | orchestrator | 2025-06-19 10:22:16 | INFO  | D [5] ------ kolla-ceph-rgw
2025-06-19 10:22:16.070840 | orchestrator | 2025-06-19 10:22:16 | INFO  | D [5] ------ glance
2025-06-19 10:22:16.070924 | orchestrator | 2025-06-19 10:22:16 | INFO  | D [5] ------ cinder
2025-06-19 10:22:16.070941 | orchestrator | 2025-06-19 10:22:16 | INFO  | D [5] ------ nova
2025-06-19 10:22:16.071189 | orchestrator | 2025-06-19 10:22:16 | INFO  | A [4] ----- prometheus
2025-06-19 10:22:16.071210 | orchestrator | 2025-06-19 10:22:16 | INFO  | D [5] ------ grafana
2025-06-19 10:22:16.263610 | orchestrator | 2025-06-19 10:22:16 | INFO  | All tasks of the collection nutshell are prepared for execution
2025-06-19 10:22:16.263731 | orchestrator | 2025-06-19 10:22:16 | INFO  | Tasks are running in the background
2025-06-19 10:22:18.624253 | orchestrator | 2025-06-19 10:22:18 | INFO  | No task IDs specified, wait for all currently running tasks
2025-06-19 10:22:20.751945 | orchestrator | 2025-06-19 10:22:20 | INFO  | Task e6603215-4351-455e-ab1f-7c7e9abd5aaf is in state STARTED
2025-06-19 10:22:20.752771 | orchestrator | 2025-06-19 10:22:20 | INFO  | Task d511e747-7572-431b-b05e-36ec1764d634 is in state STARTED
2025-06-19 10:22:20.755136 | orchestrator | 2025-06-19 10:22:20 | INFO  | Task a749c977-d939-45d1-91d8-1ae4e862c825 is in state STARTED
2025-06-19 10:22:20.755698 | orchestrator | 2025-06-19 10:22:20 | INFO  | Task 9768efd4-0fb5-473d-ae43-ee5da11d7483 is in state STARTED
2025-06-19 10:22:20.757531 | orchestrator | 2025-06-19 10:22:20 | INFO  | Task 7387afba-ddd0-4a24-9a0f-ccee237956f5 is in state STARTED
2025-06-19 10:22:20.757930 | orchestrator | 2025-06-19 10:22:20 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED
2025-06-19 10:22:20.758506 | orchestrator | 2025-06-19 10:22:20 | INFO  | Task 0a6703e4-7740-4bd4-bbcc-3ca1667d2081 is in state STARTED
2025-06-19 10:22:20.758529 | orchestrator | 2025-06-19 10:22:20 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:22:23.802410 | orchestrator | 2025-06-19 10:22:23 | INFO  | Task e6603215-4351-455e-ab1f-7c7e9abd5aaf is in state STARTED
2025-06-19 10:22:23.802523 | orchestrator | 2025-06-19 10:22:23 | INFO  | Task d511e747-7572-431b-b05e-36ec1764d634 is in state STARTED
2025-06-19 10:22:23.810374 | orchestrator | 2025-06-19 10:22:23 | INFO  | Task a749c977-d939-45d1-91d8-1ae4e862c825 is in state STARTED
2025-06-19 10:22:23.818192 | orchestrator | 2025-06-19 10:22:23 | INFO  | Task 9768efd4-0fb5-473d-ae43-ee5da11d7483 is in state STARTED
2025-06-19 10:22:23.818504 | orchestrator | 2025-06-19 10:22:23 | INFO  | Task 7387afba-ddd0-4a24-9a0f-ccee237956f5 is in state STARTED
2025-06-19 10:22:23.824078 | orchestrator | 2025-06-19 10:22:23 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED
2025-06-19 10:22:23.827058 | orchestrator | 2025-06-19 10:22:23 | INFO  | Task 0a6703e4-7740-4bd4-bbcc-3ca1667d2081 is in state STARTED
2025-06-19 10:22:23.827096 | orchestrator | 2025-06-19 10:22:23 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:22:26.873307 | orchestrator | 2025-06-19 10:22:26 | INFO  | Task e6603215-4351-455e-ab1f-7c7e9abd5aaf is in state STARTED
2025-06-19 10:22:26.873427 | orchestrator | 2025-06-19 10:22:26 | INFO  | Task d511e747-7572-431b-b05e-36ec1764d634 is in state STARTED
2025-06-19 10:22:26.873862 | orchestrator | 2025-06-19 10:22:26 | INFO  | Task a749c977-d939-45d1-91d8-1ae4e862c825 is in state STARTED
2025-06-19 10:22:26.874406 | orchestrator | 2025-06-19 10:22:26 | INFO  | Task 9768efd4-0fb5-473d-ae43-ee5da11d7483 is in state STARTED
2025-06-19 10:22:26.874861 | orchestrator | 2025-06-19 10:22:26 | INFO  | Task 7387afba-ddd0-4a24-9a0f-ccee237956f5 is in state STARTED
2025-06-19 10:22:26.875456 | orchestrator | 2025-06-19 10:22:26 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED
2025-06-19 10:22:26.881397 | orchestrator | 2025-06-19 10:22:26 | INFO  | Task 0a6703e4-7740-4bd4-bbcc-3ca1667d2081 is in state STARTED
2025-06-19 10:22:26.881426 | orchestrator | 2025-06-19 10:22:26 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:22:29.928154 | orchestrator | 2025-06-19 10:22:29 | INFO  | Task e6603215-4351-455e-ab1f-7c7e9abd5aaf is in state STARTED
2025-06-19 10:22:29.928290 | orchestrator | 2025-06-19 10:22:29 | INFO  | Task d511e747-7572-431b-b05e-36ec1764d634 is in state STARTED
2025-06-19 10:22:29.928305 | orchestrator | 2025-06-19 10:22:29 | INFO  | Task a749c977-d939-45d1-91d8-1ae4e862c825 is in state STARTED
2025-06-19 10:22:29.928316 | orchestrator | 2025-06-19 10:22:29 | INFO  | Task 9768efd4-0fb5-473d-ae43-ee5da11d7483 is in state STARTED
2025-06-19 10:22:29.936678 | orchestrator | 2025-06-19 10:22:29 | INFO  | Task 7387afba-ddd0-4a24-9a0f-ccee237956f5 is in state STARTED
2025-06-19 10:22:29.936733 | orchestrator | 2025-06-19 10:22:29 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED
2025-06-19 10:22:29.936752 | orchestrator | 2025-06-19 10:22:29 | INFO  | Task 0a6703e4-7740-4bd4-bbcc-3ca1667d2081 is in state STARTED
2025-06-19 10:22:29.936773 | orchestrator | 2025-06-19 10:22:29 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:22:32.995294 | orchestrator | 2025-06-19 10:22:32 | INFO  | Task e6603215-4351-455e-ab1f-7c7e9abd5aaf is in state STARTED
2025-06-19 10:22:32.995401 | orchestrator | 2025-06-19 10:22:32 | INFO  | Task d511e747-7572-431b-b05e-36ec1764d634 is in state STARTED
2025-06-19 10:22:32.996127 | orchestrator | 2025-06-19 10:22:32 | INFO  | Task a749c977-d939-45d1-91d8-1ae4e862c825 is in state STARTED
2025-06-19 10:22:32.997862 | orchestrator | 2025-06-19 10:22:32 | INFO  | Task 9768efd4-0fb5-473d-ae43-ee5da11d7483 is in state STARTED
2025-06-19 10:22:33.001003 | orchestrator | 2025-06-19 10:22:32 | INFO  | Task 7387afba-ddd0-4a24-9a0f-ccee237956f5 is in state STARTED
2025-06-19 10:22:33.001427 | orchestrator | 2025-06-19 10:22:32 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED
2025-06-19 10:22:33.003784 | orchestrator | 2025-06-19 10:22:32 | INFO  | Task 0a6703e4-7740-4bd4-bbcc-3ca1667d2081 is in state STARTED
2025-06-19 10:22:33.003809 | orchestrator | 2025-06-19 10:22:32 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:22:36.089551 | orchestrator | 2025-06-19 10:22:36 | INFO  | Task e6603215-4351-455e-ab1f-7c7e9abd5aaf is in state STARTED
2025-06-19 10:22:36.092636 | orchestrator | 2025-06-19 10:22:36 | INFO  | Task d511e747-7572-431b-b05e-36ec1764d634 is in state STARTED
2025-06-19 10:22:36.094590 | orchestrator | 2025-06-19 10:22:36 | INFO  | Task a749c977-d939-45d1-91d8-1ae4e862c825 is in state STARTED
2025-06-19 10:22:36.097572 | orchestrator | 2025-06-19 10:22:36 | INFO  | Task 9768efd4-0fb5-473d-ae43-ee5da11d7483 is in state STARTED
2025-06-19 10:22:36.101182 | orchestrator | 2025-06-19 10:22:36 | INFO  | Task 7387afba-ddd0-4a24-9a0f-ccee237956f5 is in state STARTED
2025-06-19 10:22:36.103952 | orchestrator | 2025-06-19 10:22:36 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED
2025-06-19 10:22:36.105786 | orchestrator | 2025-06-19 10:22:36 | INFO  | Task 0a6703e4-7740-4bd4-bbcc-3ca1667d2081 is in state STARTED
2025-06-19 10:22:36.105816 | orchestrator | 2025-06-19 10:22:36 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:22:39.194407 | orchestrator | 2025-06-19 10:22:39 | INFO  | Task e6603215-4351-455e-ab1f-7c7e9abd5aaf is in state STARTED
2025-06-19 10:22:39.197011 | orchestrator | 2025-06-19 10:22:39 | INFO  | Task d511e747-7572-431b-b05e-36ec1764d634 is in state STARTED
2025-06-19 10:22:39.198061 | orchestrator | 2025-06-19 10:22:39 | INFO  | Task a749c977-d939-45d1-91d8-1ae4e862c825 is in state STARTED
2025-06-19 10:22:39.200000 | orchestrator | 2025-06-19 10:22:39 | INFO  | Task 9768efd4-0fb5-473d-ae43-ee5da11d7483 is in state STARTED
2025-06-19 10:22:39.207720 | orchestrator | 2025-06-19 10:22:39 | INFO  | Task 7387afba-ddd0-4a24-9a0f-ccee237956f5 is in state STARTED
2025-06-19 10:22:39.207757 | orchestrator | 2025-06-19 10:22:39 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED
2025-06-19 10:22:39.207774 | orchestrator | 2025-06-19 10:22:39 | INFO  | Task 0a6703e4-7740-4bd4-bbcc-3ca1667d2081 is in state STARTED
2025-06-19 10:22:39.207785 | orchestrator | 2025-06-19 10:22:39 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:22:42.264928 | orchestrator |
2025-06-19 10:22:42.265027 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2025-06-19 10:22:42.265045 | orchestrator |
2025-06-19 10:22:42.265120 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2025-06-19 10:22:42.265143 | orchestrator | Thursday 19 June 2025 10:22:27 +0000 (0:00:00.922) 0:00:00.922 *********
2025-06-19 10:22:42.265154 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:22:42.265166 | orchestrator | changed: [testbed-manager]
2025-06-19 10:22:42.265177 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:22:42.265188 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:22:42.265199 | orchestrator | changed: [testbed-node-3]
2025-06-19 10:22:42.265210 | orchestrator | changed: [testbed-node-4]
2025-06-19 10:22:42.265220 | orchestrator | changed: [testbed-node-5]
2025-06-19 10:22:42.265231 | orchestrator |
2025-06-19 10:22:42.265242 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2025-06-19 10:22:42.265252 | orchestrator | Thursday 19 June 2025 10:22:31 +0000 (0:00:04.340) 0:00:05.263 *********
2025-06-19 10:22:42.265264 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-06-19 10:22:42.265275 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-06-19 10:22:42.265286 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-06-19 10:22:42.265297 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-06-19 10:22:42.265307 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-06-19 10:22:42.265318 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-06-19 10:22:42.265329 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-06-19 10:22:42.265339 | orchestrator |
2025-06-19 10:22:42.265373 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2025-06-19 10:22:42.265385 | orchestrator | Thursday 19 June 2025 10:22:33 +0000 (0:00:02.015) 0:00:07.278 *********
2025-06-19 10:22:42.265401 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-19 10:22:32.636348', 'end': '2025-06-19 10:22:32.640260', 'delta': '0:00:00.003912', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-06-19 10:22:42.265421 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-19 10:22:32.699869', 'end': '2025-06-19 10:22:32.709509', 'delta': '0:00:00.009640', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-06-19 10:22:42.265433 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-19 10:22:32.840218', 'end': '2025-06-19 10:22:32.849429', 'delta': '0:00:00.009211', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-06-19 10:22:42.265478 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-19 10:22:33.046633', 'end': '2025-06-19 10:22:33.053004', 'delta': '0:00:00.006371', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-06-19 10:22:42.265491 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-19 10:22:33.402660', 'end': '2025-06-19 10:22:33.412205', 'delta': '0:00:00.009545', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-06-19 10:22:42.265510 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-19 10:22:33.562586', 'end': '2025-06-19 10:22:33.571740', 'delta': '0:00:00.009154', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-06-19 10:22:42.265522 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-19 10:22:33.684336', 'end': '2025-06-19 10:22:33.689636', 'delta': '0:00:00.005300', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-06-19 10:22:42.265535 | orchestrator |
2025-06-19 10:22:42.265548 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2025-06-19 10:22:42.265560 | orchestrator | Thursday 19 June 2025 10:22:35 +0000 (0:00:01.840) 0:00:09.119 *********
2025-06-19 10:22:42.265572 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-06-19 10:22:42.265585 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-06-19 10:22:42.265604 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-06-19 10:22:42.265623 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-06-19 10:22:42.265641 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-06-19 10:22:42.265659 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-06-19 10:22:42.265683 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-06-19 10:22:42.265708 | orchestrator |
2025-06-19 10:22:42.265728 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2025-06-19 10:22:42.265747 | orchestrator | Thursday 19 June 2025 10:22:37 +0000 (0:00:02.214) 0:00:11.333 *********
2025-06-19 10:22:42.265766 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2025-06-19 10:22:42.265791 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2025-06-19 10:22:42.265816 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2025-06-19 10:22:42.265834 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2025-06-19 10:22:42.265851 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2025-06-19 10:22:42.265880 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2025-06-19 10:22:42.265900 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2025-06-19 10:22:42.265918 | orchestrator |
2025-06-19 10:22:42.265935 | orchestrator | PLAY RECAP *********************************************************************
2025-06-19 10:22:42.265966 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-19 10:22:42.266004 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-19 10:22:42.266120 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-19 10:22:42.266147 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-19 10:22:42.266165 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-19 10:22:42.266184 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-19 10:22:42.266196 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-19 10:22:42.266206 | orchestrator |
2025-06-19 10:22:42.266217 | orchestrator |
2025-06-19 10:22:42.266228 | orchestrator | TASKS RECAP ********************************************************************
2025-06-19 10:22:42.266238 | orchestrator | Thursday 19 June 2025 10:22:41 +0000 (0:00:03.264) 0:00:14.598 *********
2025-06-19 10:22:42.266249 | orchestrator | ===============================================================================
2025-06-19 10:22:42.266260 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.34s
2025-06-19 10:22:42.266270 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.26s
2025-06-19 10:22:42.266281 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.21s
2025-06-19 10:22:42.266292 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.02s
2025-06-19 10:22:42.266302 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.84s
2025-06-19 10:22:42.266313 | orchestrator | 2025-06-19 10:22:42 | INFO  | Task e6603215-4351-455e-ab1f-7c7e9abd5aaf is in state SUCCESS
2025-06-19 10:22:42.266324 | orchestrator | 2025-06-19 10:22:42 | INFO  | Task d511e747-7572-431b-b05e-36ec1764d634 is in state STARTED
2025-06-19 10:22:42.268846 | orchestrator | 2025-06-19 10:22:42 | INFO  | Task a749c977-d939-45d1-91d8-1ae4e862c825 is in state STARTED
2025-06-19 10:22:42.270745 | orchestrator | 2025-06-19 10:22:42 | INFO  | Task 9768efd4-0fb5-473d-ae43-ee5da11d7483 is in state STARTED
2025-06-19 10:22:42.274652 | orchestrator | 2025-06-19 10:22:42 | INFO  | Task 7387afba-ddd0-4a24-9a0f-ccee237956f5 is in state STARTED
2025-06-19 10:22:42.276982 | orchestrator | 2025-06-19 10:22:42 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED
2025-06-19 10:22:42.278546 | orchestrator | 2025-06-19 10:22:42 | INFO  | Task 0a6703e4-7740-4bd4-bbcc-3ca1667d2081 is in state STARTED
2025-06-19 10:22:42.278870 | orchestrator | 2025-06-19 10:22:42 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:22:45.350200 | orchestrator | 2025-06-19 10:22:45 | INFO  | Task d84cac77-1916-43d7-be42-1fb5187c9967 is in state STARTED
2025-06-19 10:22:45.352760 | orchestrator | 2025-06-19 10:22:45 | INFO  | Task d511e747-7572-431b-b05e-36ec1764d634 is in state STARTED
2025-06-19 10:22:45.353168 | orchestrator | 2025-06-19 10:22:45 | INFO  | Task a749c977-d939-45d1-91d8-1ae4e862c825 is in state STARTED
2025-06-19 10:22:45.355771 | orchestrator | 2025-06-19 10:22:45 | INFO  | Task 9768efd4-0fb5-473d-ae43-ee5da11d7483 is in state STARTED
2025-06-19 10:22:45.360365 | orchestrator | 2025-06-19 10:22:45 | INFO  | Task 7387afba-ddd0-4a24-9a0f-ccee237956f5 is in state STARTED
2025-06-19 10:22:45.360393 | orchestrator | 2025-06-19 10:22:45 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED
2025-06-19 10:22:45.366145 | orchestrator | 2025-06-19 10:22:45 | INFO  | Task 0a6703e4-7740-4bd4-bbcc-3ca1667d2081 is in state STARTED
2025-06-19 10:22:45.366187 | orchestrator | 2025-06-19 10:22:45 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:22:48.426667 | orchestrator | 2025-06-19 10:22:48 | INFO  | Task d84cac77-1916-43d7-be42-1fb5187c9967 is in state STARTED
2025-06-19 10:22:48.426758 | orchestrator | 2025-06-19 10:22:48 | INFO  | Task d511e747-7572-431b-b05e-36ec1764d634 is in state STARTED
2025-06-19 10:22:48.426773 | orchestrator | 2025-06-19 10:22:48 | INFO  | Task a749c977-d939-45d1-91d8-1ae4e862c825 is in state STARTED
2025-06-19 10:22:48.426784 | orchestrator | 2025-06-19 10:22:48 | INFO  | Task 9768efd4-0fb5-473d-ae43-ee5da11d7483 is in state STARTED
2025-06-19 10:22:48.426796 | orchestrator | 2025-06-19 10:22:48 | INFO  | Task 7387afba-ddd0-4a24-9a0f-ccee237956f5 is in state STARTED
2025-06-19 10:22:48.426807 | orchestrator | 2025-06-19 10:22:48 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED
2025-06-19 10:22:48.426818 | orchestrator | 2025-06-19 10:22:48 | INFO  | Task 0a6703e4-7740-4bd4-bbcc-3ca1667d2081 is in state STARTED
2025-06-19 10:22:48.426830 | orchestrator | 2025-06-19 10:22:48 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:22:51.483693 | orchestrator | 2025-06-19 10:22:51 | INFO  | Task d84cac77-1916-43d7-be42-1fb5187c9967 is in state STARTED
2025-06-19 10:22:51.483751 | orchestrator | 2025-06-19 10:22:51 | INFO  | Task d511e747-7572-431b-b05e-36ec1764d634 is in state STARTED
2025-06-19 10:22:51.483859 | orchestrator | 2025-06-19 10:22:51 | INFO  | Task a749c977-d939-45d1-91d8-1ae4e862c825 is in state STARTED
2025-06-19 10:22:51.487351 | orchestrator | 2025-06-19 10:22:51 | INFO  | Task 9768efd4-0fb5-473d-ae43-ee5da11d7483 is in state STARTED
2025-06-19 10:22:51.488950 | orchestrator | 2025-06-19 10:22:51 | INFO  | Task 7387afba-ddd0-4a24-9a0f-ccee237956f5 is in state STARTED
2025-06-19 10:22:51.489530 | orchestrator | 2025-06-19 10:22:51 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED
2025-06-19 10:22:51.490243 | orchestrator | 2025-06-19 10:22:51 | INFO  | Task 0a6703e4-7740-4bd4-bbcc-3ca1667d2081 is in state STARTED
2025-06-19 10:22:51.490339 | orchestrator | 2025-06-19 10:22:51 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:22:54.561865 | orchestrator | 2025-06-19 10:22:54 | INFO  | Task d84cac77-1916-43d7-be42-1fb5187c9967 is in state STARTED
2025-06-19 10:22:54.568490 | orchestrator | 2025-06-19 10:22:54 | INFO  | Task d511e747-7572-431b-b05e-36ec1764d634 is in state STARTED
2025-06-19 10:22:54.577297 | orchestrator | 2025-06-19 10:22:54 | INFO  | Task a749c977-d939-45d1-91d8-1ae4e862c825 is in state STARTED
2025-06-19 10:22:54.585279 | orchestrator | 2025-06-19 10:22:54 | INFO  | Task 9768efd4-0fb5-473d-ae43-ee5da11d7483 is in state STARTED
2025-06-19 10:22:54.585566 | orchestrator | 2025-06-19 10:22:54 | INFO  | Task 7387afba-ddd0-4a24-9a0f-ccee237956f5 is in state STARTED
2025-06-19 10:22:54.590352 | orchestrator | 2025-06-19 10:22:54 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED
2025-06-19 10:22:54.595187 | orchestrator | 2025-06-19 10:22:54 | INFO  | Task 0a6703e4-7740-4bd4-bbcc-3ca1667d2081 is in state STARTED
2025-06-19 10:22:54.595235 | orchestrator | 2025-06-19 10:22:54 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:22:57.649156 | orchestrator | 2025-06-19 10:22:57 | INFO  | Task d84cac77-1916-43d7-be42-1fb5187c9967 is in state STARTED
2025-06-19 10:22:57.649257 | orchestrator | 2025-06-19 10:22:57 | INFO  | Task d511e747-7572-431b-b05e-36ec1764d634 is in state STARTED
2025-06-19 10:22:57.651588 | orchestrator | 2025-06-19 10:22:57 | INFO  | Task a749c977-d939-45d1-91d8-1ae4e862c825 is in state STARTED
2025-06-19 10:22:57.654128 | orchestrator | 2025-06-19 10:22:57 | INFO  | Task 9768efd4-0fb5-473d-ae43-ee5da11d7483 is in state STARTED
2025-06-19 10:22:57.657157 | orchestrator | 2025-06-19 10:22:57 | INFO  | Task 7387afba-ddd0-4a24-9a0f-ccee237956f5 is in state STARTED
2025-06-19 10:22:57.657189 | orchestrator | 2025-06-19 10:22:57 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED
2025-06-19 10:22:57.661623 | orchestrator | 2025-06-19 10:22:57 | INFO  | Task 0a6703e4-7740-4bd4-bbcc-3ca1667d2081 is in state STARTED
2025-06-19 10:22:57.661648 | orchestrator | 2025-06-19 10:22:57 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:23:00.724543 | orchestrator | 2025-06-19 10:23:00 | INFO  | Task d84cac77-1916-43d7-be42-1fb5187c9967 is in state STARTED
2025-06-19 10:23:00.724750 | orchestrator | 2025-06-19 10:23:00 | INFO  | Task d511e747-7572-431b-b05e-36ec1764d634 is in state SUCCESS
2025-06-19 10:23:00.724770 | orchestrator | 2025-06-19 10:23:00 | INFO  | Task a749c977-d939-45d1-91d8-1ae4e862c825 is in state STARTED
2025-06-19 10:23:00.724782 | orchestrator | 2025-06-19 10:23:00 | INFO  | Task 9768efd4-0fb5-473d-ae43-ee5da11d7483 is in state STARTED
2025-06-19 10:23:00.724804 | orchestrator | 2025-06-19 10:23:00 | INFO  | Task 7387afba-ddd0-4a24-9a0f-ccee237956f5 is in state STARTED
2025-06-19 10:23:00.726006 | orchestrator | 2025-06-19 10:23:00 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED
2025-06-19 10:23:00.726431 | orchestrator | 2025-06-19 10:23:00 | INFO  | Task 0a6703e4-7740-4bd4-bbcc-3ca1667d2081 is in state STARTED
2025-06-19 10:23:00.726453 | orchestrator | 2025-06-19 10:23:00 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:23:03.753312 | orchestrator | 2025-06-19 10:23:03 | INFO  | Task d84cac77-1916-43d7-be42-1fb5187c9967 is in state STARTED
2025-06-19 10:23:03.753427 | orchestrator | 2025-06-19 10:23:03 | INFO  | Task a749c977-d939-45d1-91d8-1ae4e862c825 is in state STARTED
2025-06-19 10:23:03.754406 | orchestrator | 2025-06-19 10:23:03 | INFO  | Task 9768efd4-0fb5-473d-ae43-ee5da11d7483 is in state STARTED
2025-06-19 10:23:03.758139 | orchestrator | 2025-06-19 10:23:03 | INFO  | Task 7387afba-ddd0-4a24-9a0f-ccee237956f5 is in state STARTED
2025-06-19 10:23:03.761425 | orchestrator | 2025-06-19 10:23:03 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED
2025-06-19 10:23:03.762690 | orchestrator | 2025-06-19 10:23:03 | INFO  | Task 0a6703e4-7740-4bd4-bbcc-3ca1667d2081 is in state STARTED
2025-06-19 10:23:03.762921 | orchestrator | 2025-06-19 10:23:03 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:23:06.811215 | orchestrator | 2025-06-19 10:23:06 | INFO  | Task d84cac77-1916-43d7-be42-1fb5187c9967 is in state STARTED
2025-06-19 10:23:06.811297 | orchestrator | 2025-06-19 10:23:06 | INFO  | Task a749c977-d939-45d1-91d8-1ae4e862c825 is in state STARTED
2025-06-19 10:23:06.811640 | orchestrator | 2025-06-19 10:23:06 | INFO  | Task 9768efd4-0fb5-473d-ae43-ee5da11d7483 is in state STARTED
2025-06-19 10:23:06.816279 | orchestrator | 2025-06-19 10:23:06 | INFO  | Task 7387afba-ddd0-4a24-9a0f-ccee237956f5 is in state STARTED
2025-06-19 10:23:06.816304 | orchestrator | 2025-06-19 10:23:06 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED
2025-06-19 10:23:06.819725 | orchestrator | 2025-06-19 10:23:06 | INFO  | Task 0a6703e4-7740-4bd4-bbcc-3ca1667d2081 is in state STARTED
2025-06-19 10:23:06.819800 | orchestrator | 2025-06-19 10:23:06 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:23:09.867953 | orchestrator | 2025-06-19 10:23:09 | INFO  | Task d84cac77-1916-43d7-be42-1fb5187c9967 is in state STARTED
2025-06-19 10:23:09.868040 | orchestrator | 2025-06-19 10:23:09 | INFO  | Task a749c977-d939-45d1-91d8-1ae4e862c825 is in state STARTED
2025-06-19 10:23:09.868944 | orchestrator | 2025-06-19 10:23:09 | INFO  | Task 9768efd4-0fb5-473d-ae43-ee5da11d7483 is in state STARTED
2025-06-19 10:23:09.871007 | orchestrator | 2025-06-19 10:23:09 | INFO  | Task 7387afba-ddd0-4a24-9a0f-ccee237956f5 is in state STARTED
2025-06-19 10:23:09.872941 | orchestrator | 2025-06-19 10:23:09 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED
2025-06-19 10:23:09.873056 | orchestrator | 2025-06-19 10:23:09 | INFO  | Task 0a6703e4-7740-4bd4-bbcc-3ca1667d2081 is in state STARTED
2025-06-19 10:23:09.873761 | orchestrator | 2025-06-19 10:23:09 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:23:12.911131 | orchestrator | 2025-06-19 10:23:12 | INFO  | Task d84cac77-1916-43d7-be42-1fb5187c9967 is in state STARTED
2025-06-19 10:23:12.911215 | orchestrator | 2025-06-19 10:23:12 | INFO  | Task a749c977-d939-45d1-91d8-1ae4e862c825 is in state STARTED
2025-06-19 10:23:12.912831 | orchestrator | 2025-06-19 10:23:12 | INFO  | Task 9768efd4-0fb5-473d-ae43-ee5da11d7483 is in state STARTED
2025-06-19 10:23:12.917056 | orchestrator | 2025-06-19 10:23:12 | INFO  | Task 7387afba-ddd0-4a24-9a0f-ccee237956f5 is in state STARTED
2025-06-19 10:23:12.919852 | orchestrator | 2025-06-19 10:23:12 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED
2025-06-19 10:23:12.919877 | orchestrator | 2025-06-19 10:23:12 | INFO  | Task
0a6703e4-7740-4bd4-bbcc-3ca1667d2081 is in state STARTED 2025-06-19 10:23:12.919888 | orchestrator | 2025-06-19 10:23:12 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:23:15.976814 | orchestrator | 2025-06-19 10:23:15 | INFO  | Task d84cac77-1916-43d7-be42-1fb5187c9967 is in state STARTED 2025-06-19 10:23:15.977865 | orchestrator | 2025-06-19 10:23:15 | INFO  | Task a749c977-d939-45d1-91d8-1ae4e862c825 is in state STARTED 2025-06-19 10:23:15.979123 | orchestrator | 2025-06-19 10:23:15 | INFO  | Task 9768efd4-0fb5-473d-ae43-ee5da11d7483 is in state STARTED 2025-06-19 10:23:15.979941 | orchestrator | 2025-06-19 10:23:15 | INFO  | Task 7387afba-ddd0-4a24-9a0f-ccee237956f5 is in state SUCCESS 2025-06-19 10:23:15.982106 | orchestrator | 2025-06-19 10:23:15 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED 2025-06-19 10:23:15.982135 | orchestrator | 2025-06-19 10:23:15 | INFO  | Task 0a6703e4-7740-4bd4-bbcc-3ca1667d2081 is in state STARTED 2025-06-19 10:23:15.982147 | orchestrator | 2025-06-19 10:23:15 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:23:19.028967 | orchestrator | 2025-06-19 10:23:19 | INFO  | Task d84cac77-1916-43d7-be42-1fb5187c9967 is in state STARTED 2025-06-19 10:23:19.032950 | orchestrator | 2025-06-19 10:23:19 | INFO  | Task a749c977-d939-45d1-91d8-1ae4e862c825 is in state STARTED 2025-06-19 10:23:19.032992 | orchestrator | 2025-06-19 10:23:19 | INFO  | Task 9768efd4-0fb5-473d-ae43-ee5da11d7483 is in state STARTED 2025-06-19 10:23:19.035491 | orchestrator | 2025-06-19 10:23:19 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED 2025-06-19 10:23:19.035843 | orchestrator | 2025-06-19 10:23:19 | INFO  | Task 0a6703e4-7740-4bd4-bbcc-3ca1667d2081 is in state STARTED 2025-06-19 10:23:19.036221 | orchestrator | 2025-06-19 10:23:19 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:23:22.095096 | orchestrator | 2025-06-19 10:23:22 | INFO  | Task 
d84cac77-1916-43d7-be42-1fb5187c9967 is in state STARTED 2025-06-19 10:23:22.095422 | orchestrator | 2025-06-19 10:23:22 | INFO  | Task a749c977-d939-45d1-91d8-1ae4e862c825 is in state STARTED 2025-06-19 10:23:22.099403 | orchestrator | 2025-06-19 10:23:22 | INFO  | Task 9768efd4-0fb5-473d-ae43-ee5da11d7483 is in state STARTED 2025-06-19 10:23:22.103216 | orchestrator | 2025-06-19 10:23:22 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED 2025-06-19 10:23:22.106117 | orchestrator | 2025-06-19 10:23:22 | INFO  | Task 0a6703e4-7740-4bd4-bbcc-3ca1667d2081 is in state STARTED 2025-06-19 10:23:22.106145 | orchestrator | 2025-06-19 10:23:22 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:23:25.152385 | orchestrator | 2025-06-19 10:23:25 | INFO  | Task d84cac77-1916-43d7-be42-1fb5187c9967 is in state STARTED 2025-06-19 10:23:25.152496 | orchestrator | 2025-06-19 10:23:25 | INFO  | Task a749c977-d939-45d1-91d8-1ae4e862c825 is in state STARTED 2025-06-19 10:23:25.154646 | orchestrator | 2025-06-19 10:23:25 | INFO  | Task 9768efd4-0fb5-473d-ae43-ee5da11d7483 is in state STARTED 2025-06-19 10:23:25.157545 | orchestrator | 2025-06-19 10:23:25 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED 2025-06-19 10:23:25.161556 | orchestrator | 2025-06-19 10:23:25 | INFO  | Task 0a6703e4-7740-4bd4-bbcc-3ca1667d2081 is in state STARTED 2025-06-19 10:23:25.161583 | orchestrator | 2025-06-19 10:23:25 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:23:28.210359 | orchestrator | 2025-06-19 10:23:28 | INFO  | Task d84cac77-1916-43d7-be42-1fb5187c9967 is in state STARTED 2025-06-19 10:23:28.213561 | orchestrator | 2025-06-19 10:23:28 | INFO  | Task a749c977-d939-45d1-91d8-1ae4e862c825 is in state STARTED 2025-06-19 10:23:28.215181 | orchestrator | 2025-06-19 10:23:28 | INFO  | Task 9768efd4-0fb5-473d-ae43-ee5da11d7483 is in state STARTED 2025-06-19 10:23:28.218774 | orchestrator | 2025-06-19 10:23:28 | INFO  | Task 
46f364df-f39a-4554-819f-848f204d4006 is in state STARTED 2025-06-19 10:23:28.218803 | orchestrator | 2025-06-19 10:23:28 | INFO  | Task 0a6703e4-7740-4bd4-bbcc-3ca1667d2081 is in state STARTED 2025-06-19 10:23:28.218816 | orchestrator | 2025-06-19 10:23:28 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:23:31.272670 | orchestrator | 2025-06-19 10:23:31 | INFO  | Task d84cac77-1916-43d7-be42-1fb5187c9967 is in state STARTED 2025-06-19 10:23:31.272878 | orchestrator | 2025-06-19 10:23:31 | INFO  | Task a749c977-d939-45d1-91d8-1ae4e862c825 is in state SUCCESS 2025-06-19 10:23:31.274253 | orchestrator | 2025-06-19 10:23:31.274297 | orchestrator | 2025-06-19 10:23:31.274310 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-06-19 10:23:31.274322 | orchestrator | 2025-06-19 10:23:31.274333 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-06-19 10:23:31.274345 | orchestrator | Thursday 19 June 2025 10:22:26 +0000 (0:00:00.242) 0:00:00.243 ********* 2025-06-19 10:23:31.274356 | orchestrator | ok: [testbed-manager] => { 2025-06-19 10:23:31.274369 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2025-06-19 10:23:31.274381 | orchestrator | } 2025-06-19 10:23:31.274392 | orchestrator | 2025-06-19 10:23:31.274403 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-06-19 10:23:31.274414 | orchestrator | Thursday 19 June 2025 10:22:26 +0000 (0:00:00.134) 0:00:00.377 ********* 2025-06-19 10:23:31.274425 | orchestrator | ok: [testbed-manager] 2025-06-19 10:23:31.274436 | orchestrator | 2025-06-19 10:23:31.274447 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-06-19 10:23:31.274458 | orchestrator | Thursday 19 June 2025 10:22:27 +0000 (0:00:01.556) 0:00:01.934 ********* 2025-06-19 10:23:31.274469 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-06-19 10:23:31.274501 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-06-19 10:23:31.274512 | orchestrator | 2025-06-19 10:23:31.274523 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-06-19 10:23:31.274534 | orchestrator | Thursday 19 June 2025 10:22:28 +0000 (0:00:01.153) 0:00:03.087 ********* 2025-06-19 10:23:31.274572 | orchestrator | changed: [testbed-manager] 2025-06-19 10:23:31.274585 | orchestrator | 2025-06-19 10:23:31.274596 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-06-19 10:23:31.274607 | orchestrator | Thursday 19 June 2025 10:22:31 +0000 (0:00:02.447) 0:00:05.535 ********* 2025-06-19 10:23:31.274644 | orchestrator | changed: [testbed-manager] 2025-06-19 10:23:31.274655 | orchestrator | 2025-06-19 10:23:31.274667 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-06-19 10:23:31.274677 | orchestrator | Thursday 19 June 2025 10:22:33 +0000 (0:00:01.691) 0:00:07.227 ********* 2025-06-19 10:23:31.274688 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
2025-06-19 10:23:31.274699 | orchestrator | ok: [testbed-manager] 2025-06-19 10:23:31.274710 | orchestrator | 2025-06-19 10:23:31.274721 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-06-19 10:23:31.274732 | orchestrator | Thursday 19 June 2025 10:22:57 +0000 (0:00:24.162) 0:00:31.390 ********* 2025-06-19 10:23:31.274816 | orchestrator | changed: [testbed-manager] 2025-06-19 10:23:31.274829 | orchestrator | 2025-06-19 10:23:31.274839 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-19 10:23:31.274851 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-19 10:23:31.274863 | orchestrator | 2025-06-19 10:23:31.274874 | orchestrator | 2025-06-19 10:23:31.274885 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-19 10:23:31.274896 | orchestrator | Thursday 19 June 2025 10:22:59 +0000 (0:00:01.902) 0:00:33.292 ********* 2025-06-19 10:23:31.274907 | orchestrator | =============================================================================== 2025-06-19 10:23:31.274918 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 24.16s 2025-06-19 10:23:31.274928 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.45s 2025-06-19 10:23:31.274939 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 1.90s 2025-06-19 10:23:31.274950 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.69s 2025-06-19 10:23:31.274961 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.56s 2025-06-19 10:23:31.274972 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.15s 2025-06-19 10:23:31.274983 | orchestrator | osism.services.homer : Inform 
about new parameter homer_url_opensearch_dashboards --- 0.13s 2025-06-19 10:23:31.274993 | orchestrator | 2025-06-19 10:23:31.275004 | orchestrator | 2025-06-19 10:23:31.275015 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-06-19 10:23:31.275026 | orchestrator | 2025-06-19 10:23:31.275037 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-06-19 10:23:31.275047 | orchestrator | Thursday 19 June 2025 10:22:28 +0000 (0:00:00.908) 0:00:00.908 ********* 2025-06-19 10:23:31.275058 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-06-19 10:23:31.275070 | orchestrator | 2025-06-19 10:23:31.275088 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-06-19 10:23:31.275099 | orchestrator | Thursday 19 June 2025 10:22:29 +0000 (0:00:00.559) 0:00:01.467 ********* 2025-06-19 10:23:31.275110 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-06-19 10:23:31.275121 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-06-19 10:23:31.275132 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-06-19 10:23:31.275152 | orchestrator | 2025-06-19 10:23:31.275162 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-06-19 10:23:31.275173 | orchestrator | Thursday 19 June 2025 10:22:30 +0000 (0:00:01.514) 0:00:02.982 ********* 2025-06-19 10:23:31.275184 | orchestrator | changed: [testbed-manager] 2025-06-19 10:23:31.275195 | orchestrator | 2025-06-19 10:23:31.275205 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-06-19 10:23:31.275216 | orchestrator | Thursday 19 June 2025 10:22:32 +0000 (0:00:01.553) 
0:00:04.535 ********* 2025-06-19 10:23:31.275241 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-06-19 10:23:31.275252 | orchestrator | ok: [testbed-manager] 2025-06-19 10:23:31.275263 | orchestrator | 2025-06-19 10:23:31.275273 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-06-19 10:23:31.275285 | orchestrator | Thursday 19 June 2025 10:23:07 +0000 (0:00:35.125) 0:00:39.660 ********* 2025-06-19 10:23:31.275296 | orchestrator | changed: [testbed-manager] 2025-06-19 10:23:31.275306 | orchestrator | 2025-06-19 10:23:31.275317 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-06-19 10:23:31.275328 | orchestrator | Thursday 19 June 2025 10:23:08 +0000 (0:00:01.084) 0:00:40.745 ********* 2025-06-19 10:23:31.275339 | orchestrator | ok: [testbed-manager] 2025-06-19 10:23:31.275349 | orchestrator | 2025-06-19 10:23:31.275360 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-06-19 10:23:31.275371 | orchestrator | Thursday 19 June 2025 10:23:09 +0000 (0:00:00.673) 0:00:41.419 ********* 2025-06-19 10:23:31.275381 | orchestrator | changed: [testbed-manager] 2025-06-19 10:23:31.275392 | orchestrator | 2025-06-19 10:23:31.275403 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-06-19 10:23:31.275414 | orchestrator | Thursday 19 June 2025 10:23:11 +0000 (0:00:01.990) 0:00:43.409 ********* 2025-06-19 10:23:31.275424 | orchestrator | changed: [testbed-manager] 2025-06-19 10:23:31.275435 | orchestrator | 2025-06-19 10:23:31.275447 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-06-19 10:23:31.275460 | orchestrator | Thursday 19 June 2025 10:23:12 +0000 (0:00:01.315) 0:00:44.724 ********* 2025-06-19 10:23:31.275471 | orchestrator | changed: 
[testbed-manager] 2025-06-19 10:23:31.275483 | orchestrator | 2025-06-19 10:23:31.275495 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-06-19 10:23:31.275507 | orchestrator | Thursday 19 June 2025 10:23:13 +0000 (0:00:00.795) 0:00:45.520 ********* 2025-06-19 10:23:31.275518 | orchestrator | ok: [testbed-manager] 2025-06-19 10:23:31.275530 | orchestrator | 2025-06-19 10:23:31.275542 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-19 10:23:31.275554 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-19 10:23:31.275566 | orchestrator | 2025-06-19 10:23:31.275578 | orchestrator | 2025-06-19 10:23:31.275589 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-19 10:23:31.275601 | orchestrator | Thursday 19 June 2025 10:23:13 +0000 (0:00:00.399) 0:00:45.919 ********* 2025-06-19 10:23:31.275632 | orchestrator | =============================================================================== 2025-06-19 10:23:31.275644 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 35.13s 2025-06-19 10:23:31.275656 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.99s 2025-06-19 10:23:31.275668 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.55s 2025-06-19 10:23:31.275680 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.51s 2025-06-19 10:23:31.275692 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.32s 2025-06-19 10:23:31.275704 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.08s 2025-06-19 10:23:31.275716 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.80s 
2025-06-19 10:23:31.275736 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.67s 2025-06-19 10:23:31.275748 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.56s 2025-06-19 10:23:31.275760 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.40s 2025-06-19 10:23:31.275772 | orchestrator | 2025-06-19 10:23:31.275784 | orchestrator | 2025-06-19 10:23:31.275796 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-19 10:23:31.275806 | orchestrator | 2025-06-19 10:23:31.275817 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-19 10:23:31.275828 | orchestrator | Thursday 19 June 2025 10:22:27 +0000 (0:00:00.730) 0:00:00.730 ********* 2025-06-19 10:23:31.275838 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-06-19 10:23:31.275849 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-06-19 10:23:31.275860 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-06-19 10:23:31.275870 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-06-19 10:23:31.275881 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-06-19 10:23:31.275891 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-06-19 10:23:31.275902 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-06-19 10:23:31.275913 | orchestrator | 2025-06-19 10:23:31.275924 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-06-19 10:23:31.275934 | orchestrator | 2025-06-19 10:23:31.275945 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-06-19 10:23:31.275956 | orchestrator | Thursday 19 June 2025 10:22:30 +0000 
(0:00:02.717) 0:00:03.447 ********* 2025-06-19 10:23:31.275979 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-19 10:23:31.275998 | orchestrator | 2025-06-19 10:23:31.276009 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-06-19 10:23:31.276020 | orchestrator | Thursday 19 June 2025 10:22:32 +0000 (0:00:02.044) 0:00:05.491 ********* 2025-06-19 10:23:31.276030 | orchestrator | ok: [testbed-manager] 2025-06-19 10:23:31.276041 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:23:31.276051 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:23:31.276062 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:23:31.276073 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:23:31.276089 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:23:31.276100 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:23:31.276111 | orchestrator | 2025-06-19 10:23:31.276122 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-06-19 10:23:31.276132 | orchestrator | Thursday 19 June 2025 10:22:34 +0000 (0:00:02.801) 0:00:08.293 ********* 2025-06-19 10:23:31.276143 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:23:31.276154 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:23:31.276164 | orchestrator | ok: [testbed-manager] 2025-06-19 10:23:31.276175 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:23:31.276185 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:23:31.276196 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:23:31.276206 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:23:31.276217 | orchestrator | 2025-06-19 10:23:31.276227 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-06-19 10:23:31.276238 | 
orchestrator | Thursday 19 June 2025 10:22:37 +0000 (0:00:03.013) 0:00:11.307 ********* 2025-06-19 10:23:31.276248 | orchestrator | changed: [testbed-manager] 2025-06-19 10:23:31.276259 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:23:31.276270 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:23:31.276280 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:23:31.276291 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:23:31.276308 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:23:31.276319 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:23:31.276329 | orchestrator | 2025-06-19 10:23:31.276340 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-06-19 10:23:31.276351 | orchestrator | Thursday 19 June 2025 10:22:40 +0000 (0:00:02.992) 0:00:14.299 ********* 2025-06-19 10:23:31.276361 | orchestrator | changed: [testbed-manager] 2025-06-19 10:23:31.276372 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:23:31.276382 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:23:31.276393 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:23:31.276403 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:23:31.276414 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:23:31.276424 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:23:31.276435 | orchestrator | 2025-06-19 10:23:31.276446 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-06-19 10:23:31.276456 | orchestrator | Thursday 19 June 2025 10:22:50 +0000 (0:00:09.875) 0:00:24.174 ********* 2025-06-19 10:23:31.276467 | orchestrator | changed: [testbed-manager] 2025-06-19 10:23:31.276478 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:23:31.276488 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:23:31.276499 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:23:31.276509 | orchestrator | changed: [testbed-node-3] 
2025-06-19 10:23:31.276520 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:23:31.276530 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:23:31.276541 | orchestrator | 2025-06-19 10:23:31.276552 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-06-19 10:23:31.276563 | orchestrator | Thursday 19 June 2025 10:23:08 +0000 (0:00:17.186) 0:00:41.360 ********* 2025-06-19 10:23:31.276604 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-19 10:23:31.276654 | orchestrator | 2025-06-19 10:23:31.276666 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-06-19 10:23:31.276677 | orchestrator | Thursday 19 June 2025 10:23:09 +0000 (0:00:01.678) 0:00:43.039 ********* 2025-06-19 10:23:31.276688 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-06-19 10:23:31.276699 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-06-19 10:23:31.276710 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-06-19 10:23:31.276720 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-06-19 10:23:31.276731 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-06-19 10:23:31.276741 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-06-19 10:23:31.276752 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-06-19 10:23:31.276762 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-06-19 10:23:31.276773 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-06-19 10:23:31.276783 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-06-19 10:23:31.276794 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 
2025-06-19 10:23:31.276804 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-06-19 10:23:31.276815 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-06-19 10:23:31.276826 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-06-19 10:23:31.276836 | orchestrator | 2025-06-19 10:23:31.276847 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-06-19 10:23:31.276858 | orchestrator | Thursday 19 June 2025 10:23:14 +0000 (0:00:05.089) 0:00:48.128 ********* 2025-06-19 10:23:31.276873 | orchestrator | ok: [testbed-manager] 2025-06-19 10:23:31.276884 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:23:31.276895 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:23:31.276905 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:23:31.276922 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:23:31.276933 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:23:31.276943 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:23:31.276954 | orchestrator | 2025-06-19 10:23:31.276965 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-06-19 10:23:31.276975 | orchestrator | Thursday 19 June 2025 10:23:16 +0000 (0:00:01.271) 0:00:49.399 ********* 2025-06-19 10:23:31.276986 | orchestrator | changed: [testbed-manager] 2025-06-19 10:23:31.276997 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:23:31.277007 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:23:31.277018 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:23:31.277028 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:23:31.277039 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:23:31.277049 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:23:31.277060 | orchestrator | 2025-06-19 10:23:31.277071 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-06-19 
10:23:31.277088 | orchestrator | Thursday 19 June 2025 10:23:17 +0000 (0:00:01.476) 0:00:50.876 ********* 2025-06-19 10:23:31.277099 | orchestrator | ok: [testbed-manager] 2025-06-19 10:23:31.277110 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:23:31.277120 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:23:31.277131 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:23:31.277142 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:23:31.277152 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:23:31.277162 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:23:31.277173 | orchestrator | 2025-06-19 10:23:31.277184 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-06-19 10:23:31.277194 | orchestrator | Thursday 19 June 2025 10:23:18 +0000 (0:00:01.470) 0:00:52.346 ********* 2025-06-19 10:23:31.277205 | orchestrator | ok: [testbed-manager] 2025-06-19 10:23:31.277216 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:23:31.277226 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:23:31.277236 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:23:31.277247 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:23:31.277258 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:23:31.277268 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:23:31.277278 | orchestrator | 2025-06-19 10:23:31.277289 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-06-19 10:23:31.277300 | orchestrator | Thursday 19 June 2025 10:23:21 +0000 (0:00:02.185) 0:00:54.532 ********* 2025-06-19 10:23:31.277311 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-06-19 10:23:31.277323 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, 
testbed-node-5 2025-06-19 10:23:31.277334 | orchestrator | 2025-06-19 10:23:31.277345 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-06-19 10:23:31.277356 | orchestrator | Thursday 19 June 2025 10:23:22 +0000 (0:00:01.620) 0:00:56.152 ********* 2025-06-19 10:23:31.277366 | orchestrator | changed: [testbed-manager] 2025-06-19 10:23:31.277377 | orchestrator | 2025-06-19 10:23:31.277388 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-06-19 10:23:31.277399 | orchestrator | Thursday 19 June 2025 10:23:24 +0000 (0:00:02.180) 0:00:58.333 ********* 2025-06-19 10:23:31.277409 | orchestrator | changed: [testbed-manager] 2025-06-19 10:23:31.277420 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:23:31.277431 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:23:31.277442 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:23:31.277452 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:23:31.277463 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:23:31.277473 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:23:31.277484 | orchestrator | 2025-06-19 10:23:31.277494 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-19 10:23:31.277505 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-19 10:23:31.277523 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-19 10:23:31.277534 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-19 10:23:31.277545 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-19 10:23:31.277556 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-19 
10:23:31.277566 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-19 10:23:31.277577 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-19 10:23:31.277588 | orchestrator | 2025-06-19 10:23:31.277598 | orchestrator | 2025-06-19 10:23:31.277609 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-19 10:23:31.277739 | orchestrator | Thursday 19 June 2025 10:23:28 +0000 (0:00:03.597) 0:01:01.931 ********* 2025-06-19 10:23:31.277751 | orchestrator | =============================================================================== 2025-06-19 10:23:31.277762 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 17.19s 2025-06-19 10:23:31.277783 | orchestrator | osism.services.netdata : Add repository --------------------------------- 9.88s 2025-06-19 10:23:31.277795 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 5.09s 2025-06-19 10:23:31.277805 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.60s 2025-06-19 10:23:31.277816 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.01s 2025-06-19 10:23:31.277826 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.99s 2025-06-19 10:23:31.277837 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.80s 2025-06-19 10:23:31.277847 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.72s 2025-06-19 10:23:31.277858 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.19s 2025-06-19 10:23:31.277869 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.18s 2025-06-19 10:23:31.277879 | orchestrator | 
osism.services.netdata : Include distribution specific install tasks ---- 2.04s 2025-06-19 10:23:31.277897 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.68s 2025-06-19 10:23:31.277908 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.62s 2025-06-19 10:23:31.277918 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.48s 2025-06-19 10:23:31.277929 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.47s 2025-06-19 10:23:31.277940 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.27s 2025-06-19 10:23:31.277950 | orchestrator | 2025-06-19 10:23:31 | INFO  | Task 9768efd4-0fb5-473d-ae43-ee5da11d7483 is in state STARTED 2025-06-19 10:23:31.277961 | orchestrator | 2025-06-19 10:23:31 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED 2025-06-19 10:23:31.280017 | orchestrator | 2025-06-19 10:23:31 | INFO  | Task 0a6703e4-7740-4bd4-bbcc-3ca1667d2081 is in state STARTED 2025-06-19 10:23:31.280541 | orchestrator | 2025-06-19 10:23:31 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:23:34.316634 | orchestrator | 2025-06-19 10:23:34 | INFO  | Task d84cac77-1916-43d7-be42-1fb5187c9967 is in state STARTED 2025-06-19 10:23:34.317338 | orchestrator | 2025-06-19 10:23:34 | INFO  | Task 9768efd4-0fb5-473d-ae43-ee5da11d7483 is in state STARTED 2025-06-19 10:23:34.318675 | orchestrator | 2025-06-19 10:23:34 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED 2025-06-19 10:23:34.319935 | orchestrator | 2025-06-19 10:23:34 | INFO  | Task 0a6703e4-7740-4bd4-bbcc-3ca1667d2081 is in state STARTED 2025-06-19 10:23:34.320381 | orchestrator | 2025-06-19 10:23:34 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:23:37.356712 | orchestrator | 2025-06-19 10:23:37 | INFO  | Task 
d84cac77-1916-43d7-be42-1fb5187c9967 is in state STARTED 2025-06-19 10:23:37.357803 | orchestrator | 2025-06-19 10:23:37 | INFO  | Task 9768efd4-0fb5-473d-ae43-ee5da11d7483 is in state STARTED 2025-06-19 10:23:37.359324 | orchestrator | 2025-06-19 10:23:37 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED 2025-06-19 10:23:37.361171 | orchestrator | 2025-06-19 10:23:37 | INFO  | Task 0a6703e4-7740-4bd4-bbcc-3ca1667d2081 is in state STARTED 2025-06-19 10:23:37.361735 | orchestrator | 2025-06-19 10:23:37 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:24:04.813325 | orchestrator | 2025-06-19 10:24:04 | INFO  | Task 
d84cac77-1916-43d7-be42-1fb5187c9967 is in state SUCCESS 2025-06-19 10:24:04.815364 | orchestrator | 2025-06-19 10:24:04 | INFO  | Task 9768efd4-0fb5-473d-ae43-ee5da11d7483 is in state STARTED 2025-06-19 10:24:04.817627 | orchestrator | 2025-06-19 10:24:04 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED 2025-06-19 10:24:04.818893 | orchestrator | 2025-06-19 10:24:04 | INFO  | Task 0a6703e4-7740-4bd4-bbcc-3ca1667d2081 is in state STARTED 2025-06-19 10:24:04.819480 | orchestrator | 2025-06-19 10:24:04 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:24:07.863105 | orchestrator | 2025-06-19 10:24:07 | INFO  | Task 9768efd4-0fb5-473d-ae43-ee5da11d7483 is in state STARTED 2025-06-19 10:24:07.866271 | orchestrator | 2025-06-19 10:24:07 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED 2025-06-19 10:24:07.870712 | orchestrator | 2025-06-19 10:24:07 | INFO  | Task 0a6703e4-7740-4bd4-bbcc-3ca1667d2081 is in state STARTED 2025-06-19 10:24:07.870782 | orchestrator | 2025-06-19 10:24:07 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:24:10.922921 | orchestrator | 2025-06-19 10:24:10 | INFO  | Task 9768efd4-0fb5-473d-ae43-ee5da11d7483 is in state STARTED 2025-06-19 10:24:10.923146 | orchestrator | 2025-06-19 10:24:10 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED 2025-06-19 10:24:10.924216 | orchestrator | 2025-06-19 10:24:10 | INFO  | Task 0a6703e4-7740-4bd4-bbcc-3ca1667d2081 is in state STARTED 2025-06-19 10:24:10.924240 | orchestrator | 2025-06-19 10:24:10 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:24:13.978863 | orchestrator | 2025-06-19 10:24:13 | INFO  | Task 9768efd4-0fb5-473d-ae43-ee5da11d7483 is in state STARTED 2025-06-19 10:24:13.979600 | orchestrator | 2025-06-19 10:24:13 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED 2025-06-19 10:24:13.980440 | orchestrator | 2025-06-19 10:24:13 | INFO  | Task 
0a6703e4-7740-4bd4-bbcc-3ca1667d2081 is in state STARTED 2025-06-19 10:24:13.980466 | orchestrator | 2025-06-19 10:24:13 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:25:36.393089 | orchestrator | 2025-06-19 10:25:36 | INFO  | Task 9768efd4-0fb5-473d-ae43-ee5da11d7483 is in state STARTED 2025-06-19 10:25:36.394158 | orchestrator | 2025-06-19 10:25:36 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in 
state STARTED 2025-06-19 10:25:36.395994 | orchestrator | 2025-06-19 10:25:36 | INFO  | Task 0a6703e4-7740-4bd4-bbcc-3ca1667d2081 is in state STARTED 2025-06-19 10:25:36.396329 | orchestrator | 2025-06-19 10:25:36 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:25:39.445892 | orchestrator | 2025-06-19 10:25:39 | INFO  | Task f16e4bc7-597d-4bf7-8fad-13035c9ccc10 is in state STARTED 2025-06-19 10:25:39.449058 | orchestrator | 2025-06-19 10:25:39 | INFO  | Task afb5edd4-be04-4edc-a8c0-d10557f87bd2 is in state STARTED 2025-06-19 10:25:39.454386 | orchestrator | 2025-06-19 10:25:39 | INFO  | Task 9768efd4-0fb5-473d-ae43-ee5da11d7483 is in state SUCCESS 2025-06-19 10:25:39.456237 | orchestrator | 2025-06-19 10:25:39.456277 | orchestrator | 2025-06-19 10:25:39.456289 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-06-19 10:25:39.456301 | orchestrator | 2025-06-19 10:25:39.456312 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-06-19 10:25:39.456324 | orchestrator | Thursday 19 June 2025 10:22:48 +0000 (0:00:00.222) 0:00:00.222 ********* 2025-06-19 10:25:39.456336 | orchestrator | ok: [testbed-manager] 2025-06-19 10:25:39.456348 | orchestrator | 2025-06-19 10:25:39.456358 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-06-19 10:25:39.456441 | orchestrator | Thursday 19 June 2025 10:22:49 +0000 (0:00:00.936) 0:00:01.158 ********* 2025-06-19 10:25:39.456461 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-06-19 10:25:39.456531 | orchestrator | 2025-06-19 10:25:39.456552 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-06-19 10:25:39.456573 | orchestrator | Thursday 19 June 2025 10:22:49 +0000 (0:00:00.616) 0:00:01.775 ********* 2025-06-19 10:25:39.456594 | orchestrator | changed: [testbed-manager] 2025-06-19 
10:25:39.456609 | orchestrator | 2025-06-19 10:25:39.456620 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-06-19 10:25:39.456631 | orchestrator | Thursday 19 June 2025 10:22:50 +0000 (0:00:00.933) 0:00:02.708 ********* 2025-06-19 10:25:39.456642 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 2025-06-19 10:25:39.456690 | orchestrator | ok: [testbed-manager] 2025-06-19 10:25:39.456703 | orchestrator | 2025-06-19 10:25:39.456714 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-06-19 10:25:39.456725 | orchestrator | Thursday 19 June 2025 10:23:53 +0000 (0:01:02.658) 0:01:05.367 ********* 2025-06-19 10:25:39.456765 | orchestrator | changed: [testbed-manager] 2025-06-19 10:25:39.456776 | orchestrator | 2025-06-19 10:25:39.456891 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-19 10:25:39.456918 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-19 10:25:39.456939 | orchestrator | 2025-06-19 10:25:39.456951 | orchestrator | 2025-06-19 10:25:39.456961 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-19 10:25:39.456972 | orchestrator | Thursday 19 June 2025 10:24:02 +0000 (0:00:09.209) 0:01:14.576 ********* 2025-06-19 10:25:39.456983 | orchestrator | =============================================================================== 2025-06-19 10:25:39.456994 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 62.66s 2025-06-19 10:25:39.457005 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 9.21s 2025-06-19 10:25:39.457016 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.94s 2025-06-19 10:25:39.457026 | orchestrator | 
osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 0.93s 2025-06-19 10:25:39.457037 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.62s 2025-06-19 10:25:39.457048 | orchestrator | 2025-06-19 10:25:39.457059 | orchestrator | 2025-06-19 10:25:39.457070 | orchestrator | PLAY [Apply role common] ******************************************************* 2025-06-19 10:25:39.457103 | orchestrator | 2025-06-19 10:25:39.457114 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-06-19 10:25:39.457125 | orchestrator | Thursday 19 June 2025 10:22:20 +0000 (0:00:00.266) 0:00:00.266 ********* 2025-06-19 10:25:39.457136 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-19 10:25:39.457148 | orchestrator | 2025-06-19 10:25:39.457159 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-06-19 10:25:39.457178 | orchestrator | Thursday 19 June 2025 10:22:21 +0000 (0:00:01.414) 0:00:01.681 ********* 2025-06-19 10:25:39.457189 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-19 10:25:39.457200 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-19 10:25:39.457211 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-19 10:25:39.457221 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-19 10:25:39.457233 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-19 10:25:39.457244 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-19 10:25:39.457255 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 
'fluentd'}, 'fluentd']) 2025-06-19 10:25:39.457266 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-19 10:25:39.457276 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-19 10:25:39.457287 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-19 10:25:39.457298 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-19 10:25:39.457309 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-19 10:25:39.457320 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-19 10:25:39.457331 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-19 10:25:39.457341 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-19 10:25:39.457352 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-19 10:25:39.457378 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-19 10:25:39.457389 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-19 10:25:39.457400 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-19 10:25:39.457411 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-19 10:25:39.457422 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-19 10:25:39.457433 | orchestrator | 2025-06-19 10:25:39.457444 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-06-19 10:25:39.457454 | orchestrator | 
Thursday 19 June 2025 10:22:25 +0000 (0:00:03.644) 0:00:05.325 ********* 2025-06-19 10:25:39.457466 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-19 10:25:39.457478 | orchestrator | 2025-06-19 10:25:39.457489 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-06-19 10:25:39.457500 | orchestrator | Thursday 19 June 2025 10:22:26 +0000 (0:00:01.202) 0:00:06.528 ********* 2025-06-19 10:25:39.457516 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-19 10:25:39.457540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-19 10:25:39.457553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-19 10:25:39.457569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-19 10:25:39.457581 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-19 10:25:39.457592 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-19 10:25:39.457611 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-19 10:25:39.457623 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:25:39.457641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:25:39.457679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:25:39.457697 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:25:39.457708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:25:39.457727 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:25:39.457740 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:25:39.457765 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:25:39.457783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:25:39.457795 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:25:39.457806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:25:39.457822 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:25:39.457833 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:25:39.457844 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:25:39.457855 | orchestrator | 2025-06-19 10:25:39.457867 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-06-19 10:25:39.457877 | orchestrator | Thursday 19 June 2025 10:22:31 +0000 (0:00:04.565) 0:00:11.093 ********* 2025-06-19 10:25:39.457896 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-19 10:25:39.457909 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:25:39.457926 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:25:39.457938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-19 10:25:39.457950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:25:39.457961 | orchestrator | skipping: [testbed-manager] 2025-06-19 10:25:39.457977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:25:39.457989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-19 10:25:39.458000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:25:39.458075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:25:39.458112 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:25:39.458131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-19 10:25:39.458153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:25:39.458168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:25:39.458179 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:25:39.458190 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-19 10:25:39.458207 | orchestrator | skipping: [testbed-node-3] 
=> (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:25:39.458219 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:25:39.458230 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-19 10:25:39.458263 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 
'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:25:39.458276 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:25:39.458287 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:25:39.458298 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:25:39.458308 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:25:39.458319 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-19 10:25:39.458331 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:25:39.458349 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:25:39.458367 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:25:39.458386 | orchestrator | 2025-06-19 10:25:39.458403 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-06-19 10:25:39.458414 | orchestrator | Thursday 19 June 2025 10:22:32 +0000 (0:00:01.351) 0:00:12.444 ********* 2025-06-19 10:25:39.458432 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-19 10:25:39.458444 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:25:39.458469 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:25:39.458481 | orchestrator | skipping: [testbed-manager] 2025-06-19 10:25:39.458492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-19 10:25:39.458504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:25:39.458515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:25:39.458526 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:25:39.458537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-19 10:25:39.458553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:25:39.458565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:25:39.458582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-19 10:25:39.458971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:25:39.459063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:25:39.459082 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:25:39.459097 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-19 10:25:39.459110 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:25:39.459121 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:25:39.459133 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:25:39.459153 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:25:39.459164 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-19 10:25:39.459198 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:25:39.459228 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-19 10:25:39.459241 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:25:39.459252 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:25:39.459264 | orchestrator | skipping: [testbed-node-5] 
=> (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:25:39.459275 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:25:39.459286 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:25:39.459297 | orchestrator |
2025-06-19 10:25:39.459309 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2025-06-19 10:25:39.459321 | orchestrator | Thursday 19 June 2025 10:22:34 +0000 (0:00:02.156) 0:00:14.601 *********
2025-06-19 10:25:39.459332 | orchestrator | skipping: [testbed-manager]
2025-06-19 10:25:39.459342 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:25:39.459353 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:25:39.459364 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:25:39.459456 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:25:39.459474 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:25:39.459491 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:25:39.459508 | orchestrator |
2025-06-19 10:25:39.459526 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2025-06-19 10:25:39.459544 | orchestrator | Thursday 19 June 2025 10:22:35 +0000 (0:00:00.820) 0:00:15.421 *********
2025-06-19 10:25:39.459564 | orchestrator | skipping: [testbed-manager]
2025-06-19 10:25:39.459577 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:25:39.459602 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:25:39.459614 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:25:39.459626 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:25:39.459638 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:25:39.459685 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:25:39.459700 | orchestrator |
2025-06-19 10:25:39.459718 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2025-06-19 10:25:39.459731 | orchestrator | Thursday 19 June 2025 10:22:37 +0000 (0:00:01.387) 0:00:16.809 *********
2025-06-19 10:25:39.459744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-19 10:25:39.459758 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/',
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-19 10:25:39.459787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-19 10:25:39.459801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-19 10:25:39.459813 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:25:39.459827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 
'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:25:39.459854 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-19 10:25:39.459873 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-19 10:25:39.459886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:25:39.459899 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:25:39.459917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:25:39.459929 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-19 10:25:39.459940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 
'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:25:39.459951 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:25:39.459974 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:25:39.459985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:25:39.459997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:25:39.460016 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:25:39.460028 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:25:39.460039 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:25:39.460050 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:25:39.460060 | orchestrator |
2025-06-19 10:25:39.460071 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2025-06-19 10:25:39.460082 | orchestrator | Thursday 19 June 2025 10:22:42 +0000 (0:00:05.573) 0:00:22.382 *********
2025-06-19 10:25:39.460099 | orchestrator | [WARNING]: Skipped
2025-06-19 10:25:39.460111 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2025-06-19 10:25:39.460122 | orchestrator | to this access issue:
2025-06-19 10:25:39.460132 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2025-06-19 10:25:39.460142 | orchestrator | directory
2025-06-19 10:25:39.460153 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-19 10:25:39.460163 | orchestrator |
2025-06-19 10:25:39.460174 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2025-06-19 10:25:39.460184 | orchestrator | Thursday 19 June 2025 10:22:44 +0000 (0:00:01.554) 0:00:23.937 *********
2025-06-19 10:25:39.460195 | orchestrator | [WARNING]: Skipped
2025-06-19 10:25:39.460205 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2025-06-19 10:25:39.460216 | orchestrator | to this access issue:
2025-06-19 10:25:39.460226 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2025-06-19 10:25:39.460237 | orchestrator | directory
2025-06-19 10:25:39.460248 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-19 10:25:39.460258 | orchestrator |
2025-06-19 10:25:39.460269 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2025-06-19 10:25:39.460280 | orchestrator | Thursday 19 June 2025 10:22:45 +0000 (0:00:01.237) 0:00:25.174 *********
2025-06-19 10:25:39.460290 | orchestrator | [WARNING]: Skipped
2025-06-19 10:25:39.460301 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2025-06-19 10:25:39.460311 | orchestrator | to this access issue:
2025-06-19 10:25:39.460327 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2025-06-19 10:25:39.460337 | orchestrator | directory
2025-06-19 10:25:39.460348 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-19 10:25:39.460358 | orchestrator |
2025-06-19 10:25:39.460369 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2025-06-19 10:25:39.460379 | orchestrator | Thursday 19 June 2025 10:22:46 +0000 (0:00:01.099) 0:00:26.273 *********
2025-06-19 10:25:39.460390 | orchestrator | [WARNING]: Skipped
2025-06-19 10:25:39.460400 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2025-06-19 10:25:39.460411 | orchestrator | to this access issue:
2025-06-19 10:25:39.460421 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2025-06-19 10:25:39.460432 | orchestrator | directory
2025-06-19 10:25:39.460442 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-19 10:25:39.460458 | orchestrator |
2025-06-19 10:25:39.460478 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2025-06-19 10:25:39.460499 | orchestrator | Thursday 19 June 2025 10:22:47 +0000 (0:00:01.025) 0:00:27.298 *********
2025-06-19 10:25:39.460518 | orchestrator | changed: [testbed-manager]
2025-06-19 10:25:39.460538 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:25:39.460558 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:25:39.460578 | orchestrator | changed: [testbed-node-3]
2025-06-19 10:25:39.460598 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:25:39.460619 | orchestrator | changed: [testbed-node-4]
2025-06-19 10:25:39.460640 | orchestrator | changed: [testbed-node-5]
2025-06-19 10:25:39.460689 | orchestrator |
2025-06-19 10:25:39.460705 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2025-06-19 10:25:39.460723 | orchestrator | Thursday 19 June 2025 10:22:50 +0000 (0:00:03.345) 0:00:30.643 *********
2025-06-19 10:25:39.460741 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-06-19 10:25:39.460754 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-06-19 10:25:39.460765 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-06-19 10:25:39.460793 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-06-19 10:25:39.460805 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-06-19 10:25:39.460815 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-06-19 10:25:39.460826 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-06-19 10:25:39.460836 | orchestrator |
2025-06-19 10:25:39.460847 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie
exists] *************************** 2025-06-19 10:25:39.460857 | orchestrator | Thursday 19 June 2025 10:22:54 +0000 (0:00:03.791) 0:00:34.435 ********* 2025-06-19 10:25:39.460868 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:25:39.460878 | orchestrator | changed: [testbed-manager] 2025-06-19 10:25:39.460889 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:25:39.460899 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:25:39.460909 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:25:39.460920 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:25:39.460930 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:25:39.460940 | orchestrator | 2025-06-19 10:25:39.460951 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-06-19 10:25:39.460961 | orchestrator | Thursday 19 June 2025 10:22:57 +0000 (0:00:03.053) 0:00:37.488 ********* 2025-06-19 10:25:39.460973 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-19 10:25:39.460985 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:25:39.461002 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-19 10:25:39.461014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:25:39.461026 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:25:39.461055 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-19 10:25:39.461067 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:25:39.461078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:25:39.461089 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:25:39.461101 | orchestrator | ok: [testbed-node-2] => (item={'key': 
'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-19 10:25:39.461112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:25:39.461123 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-19 10:25:39.461143 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:25:39.461168 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:25:39.461180 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:25:39.461191 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-19 10:25:39.461203 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:25:39.461214 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-19 10:25:39.461230 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:25:39.461242 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:25:39.461260 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:25:39.461271 | orchestrator |
2025-06-19 10:25:39.461282 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2025-06-19 10:25:39.461293 | orchestrator | Thursday 19 June 2025 10:23:01 +0000 (0:00:03.429) 0:00:40.918 *********
2025-06-19 10:25:39.461303 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-06-19 10:25:39.461314 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-06-19 10:25:39.461325 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-06-19 10:25:39.461345 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-06-19 10:25:39.461356 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-06-19 10:25:39.461366 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-06-19 10:25:39.461377 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-06-19 10:25:39.461387 | orchestrator |
2025-06-19 10:25:39.461398 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2025-06-19 10:25:39.461409 | orchestrator | Thursday 19 June 2025 10:23:03 +0000 (0:00:02.448) 0:00:43.366 *********
2025-06-19 10:25:39.461420 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-06-19 10:25:39.461431 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-06-19 10:25:39.461441 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-06-19 10:25:39.461452 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-06-19 10:25:39.461462 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-06-19 10:25:39.461473 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-06-19 10:25:39.461484 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-06-19 10:25:39.461494 | orchestrator |
2025-06-19 10:25:39.461505 | orchestrator | TASK [common : Check common containers] ****************************************
2025-06-19 10:25:39.461515 | orchestrator | Thursday 19 June 2025 10:23:05 +0000 (0:00:02.391) 0:00:45.758 *********
2025-06-19 10:25:39.461526 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-19 10:25:39.461537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-19 10:25:39.461560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-19 10:25:39.461571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-19 10:25:39.461583 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:25:39.461600 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:25:39.461612 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-19 10:25:39.461623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:25:39.461634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:25:39.461725 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-19 10:25:39.461739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:25:39.461751 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:25:39.461769 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:25:39.461781 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-19 10:25:39.461792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:25:39.461804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:25:39.461815 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:25:39.461833 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:25:39.461849 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:25:39.461861 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:25:39.461876 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:25:39.461896 | orchestrator |
2025-06-19 10:25:39.461922 | orchestrator | TASK [common : Creating log volume] ********************************************
2025-06-19 10:25:39.461942 | orchestrator | Thursday 19 June 2025 10:23:09 +0000 (0:00:03.640) 0:00:49.398 *********
2025-06-19 10:25:39.461961 | orchestrator | changed: [testbed-manager]
2025-06-19 10:25:39.461982 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:25:39.462004 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:25:39.462083 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:25:39.462095 | orchestrator | changed: [testbed-node-3]
2025-06-19 10:25:39.462105 | orchestrator | changed: [testbed-node-4]
2025-06-19 10:25:39.462116 | orchestrator | changed: [testbed-node-5]
2025-06-19 10:25:39.462126 | orchestrator |
2025-06-19 10:25:39.462137 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2025-06-19 10:25:39.462148 | orchestrator | Thursday 19 June 2025 10:23:11 +0000 (0:00:02.016) 0:00:51.414 *********
2025-06-19 10:25:39.462159 | orchestrator | changed: [testbed-manager]
2025-06-19 10:25:39.462169 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:25:39.462179 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:25:39.462190 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:25:39.462201 | orchestrator | changed: [testbed-node-3]
2025-06-19 10:25:39.462211 | orchestrator | changed: [testbed-node-4]
2025-06-19 10:25:39.462221 | orchestrator | changed: [testbed-node-5]
2025-06-19 10:25:39.462232 | orchestrator |
2025-06-19 10:25:39.462243 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-06-19 10:25:39.462254 | orchestrator | Thursday 19 June 2025 10:23:12 +0000 (0:00:01.351) 0:00:52.766 *********
2025-06-19 10:25:39.462264 | orchestrator |
2025-06-19 10:25:39.462275 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-06-19 10:25:39.462295 | orchestrator | Thursday 19 June 2025 10:23:13 +0000 (0:00:00.078) 0:00:52.844 *********
2025-06-19 10:25:39.462314 | orchestrator |
2025-06-19 10:25:39.462334 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-06-19 10:25:39.462352 | orchestrator | Thursday 19 June 2025 10:23:13 +0000 (0:00:00.061) 0:00:52.905 *********
2025-06-19 10:25:39.462363 | orchestrator |
2025-06-19 10:25:39.462374 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-06-19 10:25:39.462384 | orchestrator | Thursday 19 June 2025 10:23:13 +0000 (0:00:00.202) 0:00:53.108 *********
2025-06-19 10:25:39.462395 | orchestrator |
2025-06-19 10:25:39.462405 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-06-19 10:25:39.462415 | orchestrator | Thursday 19 June 2025 10:23:13 +0000 (0:00:00.093) 0:00:53.201 *********
2025-06-19 10:25:39.462425 | orchestrator |
2025-06-19 10:25:39.462436 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-06-19 10:25:39.462455 | orchestrator | Thursday 19 June 2025 10:23:13 +0000 (0:00:00.078) 0:00:53.279 *********
2025-06-19 10:25:39.462475 | orchestrator |
2025-06-19 10:25:39.462487 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-06-19 10:25:39.462497 | orchestrator | Thursday 19 June 2025 10:23:13 +0000 (0:00:00.083) 0:00:53.362 *********
2025-06-19 10:25:39.462508 | orchestrator |
2025-06-19 10:25:39.462519 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2025-06-19 10:25:39.462529 | orchestrator | Thursday 19 June 2025 10:23:13 +0000 (0:00:00.105) 0:00:53.468 *********
2025-06-19 10:25:39.462540 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:25:39.462551 | orchestrator | changed: [testbed-node-3]
2025-06-19 10:25:39.462563 | orchestrator | changed: [testbed-node-5]
2025-06-19 10:25:39.462582 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:25:39.462600 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:25:39.462619 | orchestrator | changed: [testbed-manager]
2025-06-19 10:25:39.462638 | orchestrator | changed: [testbed-node-4]
2025-06-19 10:25:39.462649 | orchestrator |
2025-06-19 10:25:39.462691 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2025-06-19 10:25:39.462711 | orchestrator | Thursday 19 June 2025 10:24:10 +0000 (0:00:56.366) 0:01:49.834 *********
2025-06-19 10:25:39.462730 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:25:39.462741 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:25:39.462759 | orchestrator | changed: [testbed-node-3]
2025-06-19 10:25:39.462770 | orchestrator | changed: [testbed-node-5]
2025-06-19 10:25:39.462781 | orchestrator | changed: [testbed-manager]
2025-06-19 10:25:39.462791 | orchestrator | changed: [testbed-node-4]
2025-06-19 10:25:39.462802 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:25:39.462812 | orchestrator |
2025-06-19 10:25:39.462823 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2025-06-19 10:25:39.462834 | orchestrator | Thursday 19 June 2025 10:25:24 +0000 (0:01:14.616) 0:03:04.451 *********
2025-06-19 10:25:39.462845 | orchestrator | ok: [testbed-manager]
2025-06-19 10:25:39.462855 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:25:39.462866 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:25:39.462877 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:25:39.462887 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:25:39.462898 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:25:39.462908 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:25:39.462919 | orchestrator |
2025-06-19 10:25:39.462930 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2025-06-19 10:25:39.462940 | orchestrator | Thursday 19 June 2025 10:25:26 +0000 (0:00:02.106) 0:03:06.558 *********
2025-06-19 10:25:39.462951 | orchestrator | changed: [testbed-manager]
2025-06-19 10:25:39.462962 | orchestrator | changed: [testbed-node-4]
2025-06-19 10:25:39.462972 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:25:39.462983 | orchestrator | changed: [testbed-node-3]
2025-06-19 10:25:39.462994 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:25:39.463013 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:25:39.463024 | orchestrator | changed: [testbed-node-5]
2025-06-19 10:25:39.463040 | orchestrator |
2025-06-19 10:25:39.463057 | orchestrator | PLAY RECAP *********************************************************************
2025-06-19 10:25:39.463075 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-06-19 10:25:39.463094 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-06-19 10:25:39.463125 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-06-19 10:25:39.463143 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-06-19 10:25:39.463162 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-06-19 10:25:39.463179 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-06-19 10:25:39.463196 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-06-19 10:25:39.463214 | orchestrator |
2025-06-19 10:25:39.463233 | orchestrator |
2025-06-19 10:25:39.463251 | orchestrator | TASKS RECAP ********************************************************************
2025-06-19 10:25:39.463269 | orchestrator | Thursday 19 June 2025 10:25:36 +0000 (0:00:10.145) 0:03:16.703 *********
2025-06-19 10:25:39.463288 | orchestrator | ===============================================================================
2025-06-19 10:25:39.463307 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 74.62s
2025-06-19 10:25:39.463326 | orchestrator | common : Restart fluentd container ------------------------------------- 56.37s
2025-06-19 10:25:39.463338 | orchestrator | common : Restart cron container ---------------------------------------- 10.15s
2025-06-19 10:25:39.463349 | orchestrator | common : Copying over config.json files for services -------------------- 5.57s
2025-06-19 10:25:39.463360 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.57s
2025-06-19 10:25:39.463370 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.79s
2025-06-19 10:25:39.463381 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.64s
2025-06-19 10:25:39.463392 | orchestrator | common : Check common containers ---------------------------------------- 3.64s
2025-06-19 10:25:39.463402 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.43s
2025-06-19 10:25:39.463413 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.35s
2025-06-19 10:25:39.463424 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.05s
2025-06-19 10:25:39.463434 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.45s
2025-06-19 10:25:39.463445 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.39s
2025-06-19 10:25:39.463455 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.16s
2025-06-19 10:25:39.463466 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.11s
2025-06-19 10:25:39.463476 | orchestrator | common : Creating log volume -------------------------------------------- 2.02s
2025-06-19 10:25:39.463487 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.55s
2025-06-19 10:25:39.463497 | orchestrator | common : include_tasks -------------------------------------------------- 1.41s
2025-06-19 10:25:39.463508 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.39s
2025-06-19 10:25:39.463519 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.35s
2025-06-19 10:25:39.463546 | orchestrator | 2025-06-19 10:25:39 | INFO  | Task 6160d743-f696-4ba3-b032-efd168df75ca is in state STARTED
2025-06-19 10:25:39.463557 | orchestrator | 2025-06-19 10:25:39 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED
2025-06-19 10:25:39.463568 | orchestrator | 2025-06-19 10:25:39 | INFO  | Task 280677cd-7d68-4098-8628-3cbc842afcb1 is in state STARTED
2025-06-19 10:25:39.464690 | orchestrator | 2025-06-19 10:25:39 | INFO  | Task 0a6703e4-7740-4bd4-bbcc-3ca1667d2081 is in state STARTED
2025-06-19 10:25:39.464735 | orchestrator | 2025-06-19 10:25:39 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:25:42.493940 | orchestrator | 2025-06-19 10:25:42 | INFO  | Task f16e4bc7-597d-4bf7-8fad-13035c9ccc10 is in state STARTED
2025-06-19 10:25:42.494122 | orchestrator | 2025-06-19 10:25:42 | INFO  | Task afb5edd4-be04-4edc-a8c0-d10557f87bd2 is in state STARTED
2025-06-19 10:25:42.494988 | orchestrator | 2025-06-19 10:25:42 | INFO  | Task 6160d743-f696-4ba3-b032-efd168df75ca is in state STARTED
2025-06-19 10:25:42.495020 | orchestrator | 2025-06-19 10:25:42 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED
2025-06-19 10:25:42.496262 | orchestrator | 2025-06-19 10:25:42 | INFO  | Task 280677cd-7d68-4098-8628-3cbc842afcb1 is in state STARTED
2025-06-19 10:25:42.498145 | orchestrator | 2025-06-19 10:25:42 | INFO  | Task 0a6703e4-7740-4bd4-bbcc-3ca1667d2081 is in state STARTED
2025-06-19 10:25:42.498180 | orchestrator | 2025-06-19 10:25:42 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:25:45.520798 | orchestrator | 2025-06-19 10:25:45 | INFO  | Task f16e4bc7-597d-4bf7-8fad-13035c9ccc10 is in state STARTED
2025-06-19 10:25:45.523158 | orchestrator | 2025-06-19 10:25:45 | INFO  | Task afb5edd4-be04-4edc-a8c0-d10557f87bd2 is in state STARTED
2025-06-19 10:25:45.523545 | orchestrator | 2025-06-19 10:25:45 | INFO  | Task 6160d743-f696-4ba3-b032-efd168df75ca is in state STARTED
2025-06-19 10:25:45.524225 | orchestrator | 2025-06-19 10:25:45 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED
2025-06-19 10:25:45.527842 | orchestrator | 2025-06-19 10:25:45 | INFO  | Task 280677cd-7d68-4098-8628-3cbc842afcb1 is in state STARTED
2025-06-19 10:25:45.528261 | orchestrator | 2025-06-19 10:25:45 | INFO  | Task 0a6703e4-7740-4bd4-bbcc-3ca1667d2081 is in state STARTED
2025-06-19 10:25:45.528283 | orchestrator | 2025-06-19 10:25:45 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:25:48.551728 | orchestrator | 2025-06-19 10:25:48 | INFO  | Task f16e4bc7-597d-4bf7-8fad-13035c9ccc10 is in state STARTED
2025-06-19 10:25:48.551920 | orchestrator | 2025-06-19 10:25:48 | INFO  | Task afb5edd4-be04-4edc-a8c0-d10557f87bd2 is in state STARTED
2025-06-19 10:25:48.552465 | orchestrator | 2025-06-19 10:25:48 | INFO  | Task 6160d743-f696-4ba3-b032-efd168df75ca is in state STARTED
2025-06-19 10:25:48.553158 | orchestrator | 2025-06-19 10:25:48 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED
2025-06-19 10:25:48.553805 | orchestrator | 2025-06-19 10:25:48 | INFO  | Task 280677cd-7d68-4098-8628-3cbc842afcb1 is in state STARTED
2025-06-19 10:25:48.555115 | orchestrator | 2025-06-19 10:25:48 | INFO  | Task 0a6703e4-7740-4bd4-bbcc-3ca1667d2081 is in state STARTED
2025-06-19 10:25:48.555150 | orchestrator | 2025-06-19 10:25:48 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:25:51.592882 | orchestrator | 2025-06-19 10:25:51 | INFO  | Task f16e4bc7-597d-4bf7-8fad-13035c9ccc10 is in state STARTED
2025-06-19 10:25:51.593133 | orchestrator | 2025-06-19 10:25:51 | INFO  | Task afb5edd4-be04-4edc-a8c0-d10557f87bd2 is in state STARTED
2025-06-19 10:25:51.594763 | orchestrator | 2025-06-19 10:25:51 | INFO  | Task 6160d743-f696-4ba3-b032-efd168df75ca is in state STARTED
2025-06-19 10:25:51.596308 | orchestrator | 2025-06-19 10:25:51 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED
2025-06-19 10:25:51.597083 | orchestrator | 2025-06-19 10:25:51 | INFO  | Task 280677cd-7d68-4098-8628-3cbc842afcb1 is in state STARTED
2025-06-19 10:25:51.599154 | orchestrator | 2025-06-19 10:25:51 | INFO  | Task 0a6703e4-7740-4bd4-bbcc-3ca1667d2081 is in state STARTED
2025-06-19 10:25:51.600338 | orchestrator | 2025-06-19 10:25:51 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:25:54.654343 | orchestrator | 2025-06-19 10:25:54 | INFO  | Task f16e4bc7-597d-4bf7-8fad-13035c9ccc10 is in state STARTED
2025-06-19 10:25:54.654538 | orchestrator | 2025-06-19 10:25:54 | INFO  | Task afb5edd4-be04-4edc-a8c0-d10557f87bd2 is in state STARTED
2025-06-19 10:25:54.657003 | orchestrator | 2025-06-19 10:25:54 | INFO  | Task a82b4610-ae56-4c14-84bc-49c5340bd866 is in state STARTED
2025-06-19 10:25:54.657768 | orchestrator | 2025-06-19 10:25:54 | INFO  | Task 6160d743-f696-4ba3-b032-efd168df75ca is in state STARTED
2025-06-19 10:25:54.658436 | orchestrator | 2025-06-19 10:25:54 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED
2025-06-19 10:25:54.659855 | orchestrator | 2025-06-19 10:25:54 | INFO  | Task 280677cd-7d68-4098-8628-3cbc842afcb1 is in state SUCCESS
2025-06-19 10:25:54.662924 | orchestrator | 2025-06-19 10:25:54 | INFO  | Task 0a6703e4-7740-4bd4-bbcc-3ca1667d2081 is in state STARTED
2025-06-19 10:25:54.662971 | orchestrator | 2025-06-19 10:25:54 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:25:57.696895 | orchestrator | 2025-06-19 10:25:57 | INFO  | Task f16e4bc7-597d-4bf7-8fad-13035c9ccc10 is in state STARTED
2025-06-19 10:25:57.699185 | orchestrator | 2025-06-19 10:25:57 | INFO  | Task afb5edd4-be04-4edc-a8c0-d10557f87bd2 is in state STARTED
2025-06-19 10:25:57.699626 | orchestrator | 2025-06-19 10:25:57 | INFO  | Task a82b4610-ae56-4c14-84bc-49c5340bd866 is in state STARTED
2025-06-19 10:25:57.700235 | orchestrator | 2025-06-19 10:25:57 | INFO  | Task 6160d743-f696-4ba3-b032-efd168df75ca is in state STARTED
2025-06-19 10:25:57.701808 | orchestrator | 2025-06-19 10:25:57 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED
2025-06-19 10:25:57.702352 | orchestrator | 2025-06-19 10:25:57 | INFO  | Task 0a6703e4-7740-4bd4-bbcc-3ca1667d2081 is in state STARTED
2025-06-19 10:25:57.702390 | orchestrator | 2025-06-19 10:25:57 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:26:00.732273 | orchestrator | 2025-06-19 10:26:00 | INFO  | Task f16e4bc7-597d-4bf7-8fad-13035c9ccc10 is in state STARTED
2025-06-19 10:26:00.733209 | orchestrator | 2025-06-19 10:26:00 | INFO  | Task afb5edd4-be04-4edc-a8c0-d10557f87bd2 is in state STARTED
2025-06-19 10:26:00.734621 | orchestrator | 2025-06-19 10:26:00 | INFO  | Task a82b4610-ae56-4c14-84bc-49c5340bd866 is in state STARTED
2025-06-19 10:26:00.735199 | orchestrator | 2025-06-19 10:26:00 | INFO  | Task 6160d743-f696-4ba3-b032-efd168df75ca is in state STARTED
2025-06-19 10:26:00.737937 | orchestrator | 2025-06-19 10:26:00 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED
2025-06-19 10:26:00.739392 | orchestrator | 2025-06-19 10:26:00 | INFO  | Task 0a6703e4-7740-4bd4-bbcc-3ca1667d2081 is in state STARTED
2025-06-19 10:26:00.739447 | orchestrator | 2025-06-19 10:26:00 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:26:03.786596 | orchestrator | 2025-06-19 10:26:03 | INFO  | Task f16e4bc7-597d-4bf7-8fad-13035c9ccc10 is in state STARTED
2025-06-19 10:26:03.786976 | orchestrator | 2025-06-19 10:26:03 | INFO  | Task afb5edd4-be04-4edc-a8c0-d10557f87bd2 is in state STARTED
2025-06-19 10:26:03.789123 | orchestrator | 2025-06-19 10:26:03 | INFO  | Task a82b4610-ae56-4c14-84bc-49c5340bd866 is in state STARTED
2025-06-19 10:26:03.790529 | orchestrator | 2025-06-19 10:26:03 | INFO  | Task 6160d743-f696-4ba3-b032-efd168df75ca is in state STARTED
2025-06-19 10:26:03.791474 | orchestrator | 2025-06-19 10:26:03 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED
2025-06-19 10:26:03.795105 | orchestrator | 2025-06-19 10:26:03 | INFO  | Task 0a6703e4-7740-4bd4-bbcc-3ca1667d2081 is in state STARTED
2025-06-19 10:26:03.795137 | orchestrator | 2025-06-19 10:26:03 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:26:06.830661 | orchestrator | 2025-06-19 10:26:06 | INFO  | Task f16e4bc7-597d-4bf7-8fad-13035c9ccc10 is in state STARTED
2025-06-19 10:26:06.831751 | orchestrator | 2025-06-19 10:26:06 | INFO  | Task afb5edd4-be04-4edc-a8c0-d10557f87bd2 is in state STARTED
2025-06-19 10:26:06.832776 | orchestrator | 2025-06-19 10:26:06 | INFO  | Task a82b4610-ae56-4c14-84bc-49c5340bd866 is in state STARTED
2025-06-19 10:26:06.833831 | orchestrator | 2025-06-19 10:26:06 | INFO  | Task 6160d743-f696-4ba3-b032-efd168df75ca is in state STARTED
2025-06-19 10:26:06.834667 | orchestrator | 2025-06-19 10:26:06 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED
2025-06-19 10:26:06.835629 | orchestrator | 2025-06-19 10:26:06 | INFO  | Task 0a6703e4-7740-4bd4-bbcc-3ca1667d2081 is in state STARTED
2025-06-19 10:26:06.835654 | orchestrator | 2025-06-19 10:26:06 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:26:09.886871 | orchestrator | 2025-06-19 10:26:09 | INFO  | Task f16e4bc7-597d-4bf7-8fad-13035c9ccc10 is in state STARTED
2025-06-19 10:26:09.886990 | orchestrator | 2025-06-19 10:26:09 | INFO  | Task afb5edd4-be04-4edc-a8c0-d10557f87bd2 is in state STARTED
2025-06-19 10:26:09.887006 | orchestrator | 2025-06-19 10:26:09 | INFO  | Task a82b4610-ae56-4c14-84bc-49c5340bd866 is in state STARTED
2025-06-19 10:26:09.890411 | orchestrator | 2025-06-19 10:26:09 | INFO  | Task 6160d743-f696-4ba3-b032-efd168df75ca is in state STARTED
2025-06-19 10:26:09.890438 | orchestrator | 2025-06-19 10:26:09 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED
2025-06-19 10:26:09.892270 | orchestrator | 2025-06-19 10:26:09 | INFO  | Task 0a6703e4-7740-4bd4-bbcc-3ca1667d2081 is in state STARTED
2025-06-19 10:26:09.892681 | orchestrator | 2025-06-19 10:26:09 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:26:12.929570 | orchestrator | 2025-06-19 10:26:12 | INFO  | Task f16e4bc7-597d-4bf7-8fad-13035c9ccc10 is in state SUCCESS
2025-06-19 10:26:12.930147 | orchestrator |
2025-06-19 10:26:12.930183 | orchestrator |
2025-06-19 10:26:12.930197 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-19 10:26:12.930209 | orchestrator |
2025-06-19 10:26:12.930221 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-19 10:26:12.930232 | orchestrator | Thursday 19 June 2025 10:25:44 +0000 (0:00:00.300) 0:00:00.300 *********
2025-06-19 10:26:12.930243 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:26:12.930255 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:26:12.930265 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:26:12.930276 | orchestrator |
2025-06-19 10:26:12.930287 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-19 10:26:12.930298 | orchestrator | Thursday 19 June 2025 10:25:44 +0000 (0:00:00.421) 0:00:00.722 *********
2025-06-19 10:26:12.930310 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2025-06-19 10:26:12.930321 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2025-06-19 10:26:12.930357 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2025-06-19 10:26:12.930369 | orchestrator |
2025-06-19 10:26:12.930380 | orchestrator | PLAY [Apply role memcached] ****************************************************
2025-06-19 10:26:12.930391 | orchestrator |
2025-06-19 10:26:12.930401 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2025-06-19 10:26:12.930412 | orchestrator | Thursday 19 June 2025 10:25:45 +0000 (0:00:00.490) 0:00:01.212 *********
2025-06-19 10:26:12.930479 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-19 10:26:12.930492 | orchestrator |
2025-06-19 10:26:12.930503 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2025-06-19 10:26:12.930514 | orchestrator | Thursday 19 June 2025 10:25:45 +0000 (0:00:00.616) 0:00:01.828 *********
2025-06-19 10:26:12.930525 | orchestrator | changed:
[testbed-node-0] => (item=memcached) 2025-06-19 10:26:12.930536 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-06-19 10:26:12.930546 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-06-19 10:26:12.930558 | orchestrator | 2025-06-19 10:26:12.930569 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-06-19 10:26:12.930580 | orchestrator | Thursday 19 June 2025 10:25:46 +0000 (0:00:00.798) 0:00:02.627 ********* 2025-06-19 10:26:12.930591 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-06-19 10:26:12.930601 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-06-19 10:26:12.930612 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-06-19 10:26:12.930623 | orchestrator | 2025-06-19 10:26:12.930633 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-06-19 10:26:12.930644 | orchestrator | Thursday 19 June 2025 10:25:48 +0000 (0:00:01.922) 0:00:04.549 ********* 2025-06-19 10:26:12.930655 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:26:12.930666 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:26:12.930676 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:26:12.930687 | orchestrator | 2025-06-19 10:26:12.930698 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-06-19 10:26:12.930708 | orchestrator | Thursday 19 June 2025 10:25:50 +0000 (0:00:01.884) 0:00:06.434 ********* 2025-06-19 10:26:12.930719 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:26:12.930730 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:26:12.930740 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:26:12.930751 | orchestrator | 2025-06-19 10:26:12.930762 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-19 10:26:12.930773 | orchestrator | testbed-node-0 : ok=7 
 changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-19 10:26:12.930785 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-19 10:26:12.930796 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-19 10:26:12.930807 | orchestrator | 2025-06-19 10:26:12.930818 | orchestrator | 2025-06-19 10:26:12.930828 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-19 10:26:12.930839 | orchestrator | Thursday 19 June 2025 10:25:52 +0000 (0:00:02.288) 0:00:08.722 ********* 2025-06-19 10:26:12.930850 | orchestrator | =============================================================================== 2025-06-19 10:26:12.930860 | orchestrator | memcached : Restart memcached container --------------------------------- 2.29s 2025-06-19 10:26:12.930871 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.92s 2025-06-19 10:26:12.930882 | orchestrator | memcached : Check memcached container ----------------------------------- 1.88s 2025-06-19 10:26:12.930892 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.80s 2025-06-19 10:26:12.930903 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.62s 2025-06-19 10:26:12.930924 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.49s 2025-06-19 10:26:12.930934 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.42s 2025-06-19 10:26:12.930945 | orchestrator | 2025-06-19 10:26:12.930955 | orchestrator | 2025-06-19 10:26:12.930966 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-19 10:26:12.930977 | orchestrator | 2025-06-19 10:26:12.930987 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2025-06-19 10:26:12.930998 | orchestrator | Thursday 19 June 2025 10:25:43 +0000 (0:00:00.514) 0:00:00.514 ********* 2025-06-19 10:26:12.931009 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:26:12.931019 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:26:12.931030 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:26:12.931040 | orchestrator | 2025-06-19 10:26:12.931051 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-19 10:26:12.931074 | orchestrator | Thursday 19 June 2025 10:25:43 +0000 (0:00:00.404) 0:00:00.918 ********* 2025-06-19 10:26:12.931086 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-06-19 10:26:12.931097 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-06-19 10:26:12.931107 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-06-19 10:26:12.931118 | orchestrator | 2025-06-19 10:26:12.931129 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-06-19 10:26:12.931139 | orchestrator | 2025-06-19 10:26:12.931150 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-06-19 10:26:12.931161 | orchestrator | Thursday 19 June 2025 10:25:44 +0000 (0:00:00.605) 0:00:01.524 ********* 2025-06-19 10:26:12.931186 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:26:12.931198 | orchestrator | 2025-06-19 10:26:12.931208 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-06-19 10:26:12.931219 | orchestrator | Thursday 19 June 2025 10:25:45 +0000 (0:00:00.711) 0:00:02.235 ********* 2025-06-19 10:26:12.931232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-19 10:26:12.931247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-19 10:26:12.931259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-19 10:26:12.931271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-19 10:26:12.931328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-19 10:26:12.931352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-19 10:26:12.931364 | orchestrator | 2025-06-19 10:26:12.931375 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-06-19 10:26:12.931386 | orchestrator 
| Thursday 19 June 2025 10:25:46 +0000 (0:00:01.675) 0:00:03.911 ********* 2025-06-19 10:26:12.931397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-19 10:26:12.931409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-19 10:26:12.931420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-19 10:26:12.931431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 
'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-19 10:26:12.931490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-19 10:26:12.931509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': 
'30'}}}) 2025-06-19 10:26:12.931520 | orchestrator | 2025-06-19 10:26:12.931531 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-06-19 10:26:12.931542 | orchestrator | Thursday 19 June 2025 10:25:49 +0000 (0:00:02.835) 0:00:06.747 ********* 2025-06-19 10:26:12.931553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-19 10:26:12.931565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-19 10:26:12.931576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
redis-server 6379'], 'timeout': '30'}}}) 2025-06-19 10:26:12.931587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-19 10:26:12.931606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-19 10:26:12.931622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-19 10:26:12.931634 | orchestrator | 2025-06-19 10:26:12.931650 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-06-19 10:26:12.931661 | orchestrator | Thursday 19 June 2025 10:25:52 +0000 (0:00:02.708) 0:00:09.455 ********* 2025-06-19 10:26:12.931673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-19 10:26:12.931684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-19 10:26:12.931696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-19 10:26:12.931714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-19 10:26:12.931725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-19 10:26:12.931741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-19 10:26:12.931752 | orchestrator | 2025-06-19 10:26:12.931763 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-06-19 10:26:12.931774 | orchestrator | Thursday 19 June 2025 10:25:54 +0000 (0:00:01.738) 0:00:11.193 ********* 2025-06-19 10:26:12.931785 | orchestrator | 2025-06-19 10:26:12.931796 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-06-19 10:26:12.931812 | orchestrator | Thursday 19 June 2025 10:25:54 +0000 (0:00:00.144) 0:00:11.338 ********* 2025-06-19 10:26:12.931823 | orchestrator | 2025-06-19 10:26:12.931834 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-06-19 10:26:12.931845 | orchestrator | Thursday 19 June 2025 10:25:54 +0000 (0:00:00.165) 0:00:11.504 ********* 2025-06-19 10:26:12.931855 | orchestrator | 2025-06-19 10:26:12.931866 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-06-19 10:26:12.931877 | orchestrator | Thursday 19 June 2025 10:25:54 +0000 (0:00:00.188) 0:00:11.692 ********* 2025-06-19 10:26:12.931887 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:26:12.931898 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:26:12.931909 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:26:12.931919 | orchestrator | 2025-06-19 10:26:12.931930 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] 
********************* 2025-06-19 10:26:12.931941 | orchestrator | Thursday 19 June 2025 10:26:01 +0000 (0:00:07.038) 0:00:18.731 ********* 2025-06-19 10:26:12.931951 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:26:12.931962 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:26:12.931972 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:26:12.931983 | orchestrator | 2025-06-19 10:26:12.931994 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-19 10:26:12.932005 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-19 10:26:12.932024 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-19 10:26:12.932068 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-19 10:26:12.932080 | orchestrator | 2025-06-19 10:26:12.932091 | orchestrator | 2025-06-19 10:26:12.932102 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-19 10:26:12.932113 | orchestrator | Thursday 19 June 2025 10:26:12 +0000 (0:00:10.693) 0:00:29.424 ********* 2025-06-19 10:26:12.932123 | orchestrator | =============================================================================== 2025-06-19 10:26:12.932134 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 10.69s 2025-06-19 10:26:12.932144 | orchestrator | redis : Restart redis container ----------------------------------------- 7.04s 2025-06-19 10:26:12.932155 | orchestrator | redis : Copying over default config.json files -------------------------- 2.84s 2025-06-19 10:26:12.932165 | orchestrator | redis : Copying over redis config files --------------------------------- 2.71s 2025-06-19 10:26:12.932176 | orchestrator | redis : Check redis containers ------------------------------------------ 1.74s 2025-06-19 
10:26:12.932186 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.68s 2025-06-19 10:26:12.932197 | orchestrator | redis : include_tasks --------------------------------------------------- 0.71s 2025-06-19 10:26:12.932207 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.61s 2025-06-19 10:26:12.932218 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.50s 2025-06-19 10:26:12.932228 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.40s 2025-06-19 10:26:12.932239 | orchestrator | 2025-06-19 10:26:12 | INFO  | Task afb5edd4-be04-4edc-a8c0-d10557f87bd2 is in state STARTED 2025-06-19 10:26:12.932349 | orchestrator | 2025-06-19 10:26:12 | INFO  | Task a82b4610-ae56-4c14-84bc-49c5340bd866 is in state STARTED 2025-06-19 10:26:12.932368 | orchestrator | 2025-06-19 10:26:12 | INFO  | Task 6160d743-f696-4ba3-b032-efd168df75ca is in state STARTED 2025-06-19 10:26:12.932379 | orchestrator | 2025-06-19 10:26:12 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED 2025-06-19 10:26:12.932920 | orchestrator | 2025-06-19 10:26:12 | INFO  | Task 0a6703e4-7740-4bd4-bbcc-3ca1667d2081 is in state STARTED 2025-06-19 10:26:12.932942 | orchestrator | 2025-06-19 10:26:12 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:26:15.973292 | orchestrator | 2025-06-19 10:26:15 | INFO  | Task afb5edd4-be04-4edc-a8c0-d10557f87bd2 is in state STARTED 2025-06-19 10:26:15.973397 | orchestrator | 2025-06-19 10:26:15 | INFO  | Task a82b4610-ae56-4c14-84bc-49c5340bd866 is in state STARTED 2025-06-19 10:26:15.974126 | orchestrator | 2025-06-19 10:26:15 | INFO  | Task 6160d743-f696-4ba3-b032-efd168df75ca is in state STARTED 2025-06-19 10:26:15.974632 | orchestrator | 2025-06-19 10:26:15 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED 2025-06-19 10:26:15.975356 | orchestrator | 2025-06-19 
10:26:15 | INFO  | Task 0a6703e4-7740-4bd4-bbcc-3ca1667d2081 is in state STARTED 2025-06-19 10:26:15.975392 | orchestrator | 2025-06-19 10:26:15 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:26:19.006003 | orchestrator | 2025-06-19 10:26:19 | INFO  | Task afb5edd4-be04-4edc-a8c0-d10557f87bd2 is in state STARTED 2025-06-19 10:26:19.006173 | orchestrator | 2025-06-19 10:26:19 | INFO  | Task a82b4610-ae56-4c14-84bc-49c5340bd866 is in state STARTED 2025-06-19 10:26:19.006189 | orchestrator | 2025-06-19 10:26:19 | INFO  | Task 6160d743-f696-4ba3-b032-efd168df75ca is in state STARTED 2025-06-19 10:26:19.007502 | orchestrator | 2025-06-19 10:26:19 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED 2025-06-19 10:26:19.007562 | orchestrator | 2025-06-19 10:26:19 | INFO  | Task 0a6703e4-7740-4bd4-bbcc-3ca1667d2081 is in state STARTED 2025-06-19 10:26:19.007575 | orchestrator | 2025-06-19 10:26:19 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:26:22.042937 | orchestrator | 2025-06-19 10:26:22 | INFO  | Task afb5edd4-be04-4edc-a8c0-d10557f87bd2 is in state STARTED 2025-06-19 10:26:22.044585 | orchestrator | 2025-06-19 10:26:22 | INFO  | Task a82b4610-ae56-4c14-84bc-49c5340bd866 is in state STARTED 2025-06-19 10:26:22.045055 | orchestrator | 2025-06-19 10:26:22 | INFO  | Task 6160d743-f696-4ba3-b032-efd168df75ca is in state STARTED 2025-06-19 10:26:22.046240 | orchestrator | 2025-06-19 10:26:22 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED 2025-06-19 10:26:22.055263 | orchestrator | 2025-06-19 10:26:22 | INFO  | Task 0a6703e4-7740-4bd4-bbcc-3ca1667d2081 is in state STARTED 2025-06-19 10:26:22.055318 | orchestrator | 2025-06-19 10:26:22 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:26:25.084313 | orchestrator | 2025-06-19 10:26:25 | INFO  | Task afb5edd4-be04-4edc-a8c0-d10557f87bd2 is in state STARTED 2025-06-19 10:26:25.084474 | orchestrator | 2025-06-19 10:26:25 | INFO  | Task 
a82b4610-ae56-4c14-84bc-49c5340bd866 is in state STARTED 2025-06-19 10:26:25.091795 | orchestrator | 2025-06-19 10:26:25 | INFO  | Task 6160d743-f696-4ba3-b032-efd168df75ca is in state STARTED 2025-06-19 10:26:25.092188 | orchestrator | 2025-06-19 10:26:25 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED 2025-06-19 10:26:25.092652 | orchestrator | 2025-06-19 10:26:25 | INFO  | Task 0a6703e4-7740-4bd4-bbcc-3ca1667d2081 is in state STARTED 2025-06-19 10:26:25.092673 | orchestrator | 2025-06-19 10:26:25 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:26:28.116712 | orchestrator | 2025-06-19 10:26:28 | INFO  | Task afb5edd4-be04-4edc-a8c0-d10557f87bd2 is in state STARTED 2025-06-19 10:26:28.116969 | orchestrator | 2025-06-19 10:26:28 | INFO  | Task a82b4610-ae56-4c14-84bc-49c5340bd866 is in state STARTED 2025-06-19 10:26:28.116990 | orchestrator | 2025-06-19 10:26:28 | INFO  | Task 6160d743-f696-4ba3-b032-efd168df75ca is in state STARTED 2025-06-19 10:26:28.117747 | orchestrator | 2025-06-19 10:26:28 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED 2025-06-19 10:26:28.118262 | orchestrator | 2025-06-19 10:26:28 | INFO  | Task 0a6703e4-7740-4bd4-bbcc-3ca1667d2081 is in state STARTED 2025-06-19 10:26:28.118283 | orchestrator | 2025-06-19 10:26:28 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:26:31.150781 | orchestrator | 2025-06-19 10:26:31 | INFO  | Task afb5edd4-be04-4edc-a8c0-d10557f87bd2 is in state STARTED 2025-06-19 10:26:31.155861 | orchestrator | 2025-06-19 10:26:31 | INFO  | Task a82b4610-ae56-4c14-84bc-49c5340bd866 is in state STARTED 2025-06-19 10:26:31.156382 | orchestrator | 2025-06-19 10:26:31 | INFO  | Task 6160d743-f696-4ba3-b032-efd168df75ca is in state STARTED 2025-06-19 10:26:31.156900 | orchestrator | 2025-06-19 10:26:31 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED 2025-06-19 10:26:31.158410 | orchestrator | 2025-06-19 10:26:31 | INFO  | Task 
0a6703e4-7740-4bd4-bbcc-3ca1667d2081 is in state SUCCESS 2025-06-19 10:26:31.159795 | orchestrator | 2025-06-19 10:26:31.159826 | orchestrator | 2025-06-19 10:26:31.159839 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-06-19 10:26:31.159851 | orchestrator | 2025-06-19 10:26:31.159862 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-06-19 10:26:31.159874 | orchestrator | Thursday 19 June 2025 10:22:20 +0000 (0:00:00.189) 0:00:00.189 ********* 2025-06-19 10:26:31.159909 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:26:31.159922 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:26:31.159934 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:26:31.159959 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:26:31.159971 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:26:31.159982 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:26:31.159993 | orchestrator | 2025-06-19 10:26:31.160005 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-06-19 10:26:31.160016 | orchestrator | Thursday 19 June 2025 10:22:21 +0000 (0:00:00.731) 0:00:00.921 ********* 2025-06-19 10:26:31.160028 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:26:31.160040 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:26:31.160051 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:26:31.160062 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:26:31.160073 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:26:31.160084 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:26:31.160095 | orchestrator | 2025-06-19 10:26:31.160107 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-06-19 10:26:31.160118 | orchestrator | Thursday 19 June 2025 10:22:22 +0000 (0:00:00.620) 0:00:01.542 ********* 2025-06-19 10:26:31.160130 | orchestrator | 
skipping: [testbed-node-3] 2025-06-19 10:26:31.160141 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:26:31.160152 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:26:31.160163 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:26:31.160174 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:26:31.160185 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:26:31.160196 | orchestrator | 2025-06-19 10:26:31.160207 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-06-19 10:26:31.160219 | orchestrator | Thursday 19 June 2025 10:22:23 +0000 (0:00:00.776) 0:00:02.318 ********* 2025-06-19 10:26:31.160230 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:26:31.160241 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:26:31.160252 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:26:31.160263 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:26:31.160275 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:26:31.160286 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:26:31.160297 | orchestrator | 2025-06-19 10:26:31.160308 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-06-19 10:26:31.160319 | orchestrator | Thursday 19 June 2025 10:22:25 +0000 (0:00:02.017) 0:00:04.335 ********* 2025-06-19 10:26:31.160350 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:26:31.160361 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:26:31.160372 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:26:31.160382 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:26:31.160395 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:26:31.160407 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:26:31.160420 | orchestrator | 2025-06-19 10:26:31.160432 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-06-19 10:26:31.160444 | 
orchestrator | Thursday 19 June 2025 10:22:26 +0000 (0:00:01.446) 0:00:05.782 ********* 2025-06-19 10:26:31.160456 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:26:31.160467 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:26:31.160479 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:26:31.160491 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:26:31.160503 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:26:31.160514 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:26:31.160526 | orchestrator | 2025-06-19 10:26:31.160538 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-06-19 10:26:31.160550 | orchestrator | Thursday 19 June 2025 10:22:28 +0000 (0:00:02.366) 0:00:08.149 ********* 2025-06-19 10:26:31.160562 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:26:31.160574 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:26:31.160586 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:26:31.160607 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:26:31.160619 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:26:31.160630 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:26:31.160643 | orchestrator | 2025-06-19 10:26:31.160654 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-06-19 10:26:31.160666 | orchestrator | Thursday 19 June 2025 10:22:29 +0000 (0:00:00.649) 0:00:08.799 ********* 2025-06-19 10:26:31.160678 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:26:31.160690 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:26:31.160701 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:26:31.160713 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:26:31.160725 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:26:31.160737 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:26:31.160748 | orchestrator | 2025-06-19 
10:26:31.160759 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-06-19 10:26:31.160769 | orchestrator | Thursday 19 June 2025 10:22:30 +0000 (0:00:00.957) 0:00:09.756 ********* 2025-06-19 10:26:31.160781 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-19 10:26:31.160792 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-19 10:26:31.160802 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:26:31.160813 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-19 10:26:31.160824 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-19 10:26:31.160834 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:26:31.160844 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-19 10:26:31.160855 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-19 10:26:31.160865 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:26:31.160876 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-19 10:26:31.160898 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-19 10:26:31.160910 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:26:31.160920 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-19 10:26:31.160931 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-19 10:26:31.160941 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:26:31.160952 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-19 10:26:31.160968 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  
2025-06-19 10:26:31.160979 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:26:31.160989 | orchestrator | 2025-06-19 10:26:31.161000 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-06-19 10:26:31.161010 | orchestrator | Thursday 19 June 2025 10:22:31 +0000 (0:00:00.744) 0:00:10.501 ********* 2025-06-19 10:26:31.161021 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:26:31.161032 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:26:31.161042 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:26:31.161053 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:26:31.161063 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:26:31.161074 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:26:31.161084 | orchestrator | 2025-06-19 10:26:31.161095 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-06-19 10:26:31.161106 | orchestrator | Thursday 19 June 2025 10:22:32 +0000 (0:00:01.351) 0:00:11.853 ********* 2025-06-19 10:26:31.161117 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:26:31.161127 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:26:31.161138 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:26:31.161148 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:26:31.161159 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:26:31.161176 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:26:31.161187 | orchestrator | 2025-06-19 10:26:31.161197 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-06-19 10:26:31.161208 | orchestrator | Thursday 19 June 2025 10:22:33 +0000 (0:00:00.876) 0:00:12.729 ********* 2025-06-19 10:26:31.161219 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:26:31.161229 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:26:31.161240 | orchestrator | changed: [testbed-node-5] 2025-06-19 
10:26:31.161250 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:26:31.161261 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:26:31.161271 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:26:31.161282 | orchestrator | 2025-06-19 10:26:31.161292 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-06-19 10:26:31.161303 | orchestrator | Thursday 19 June 2025 10:22:39 +0000 (0:00:05.604) 0:00:18.333 ********* 2025-06-19 10:26:31.161314 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:26:31.161324 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:26:31.161428 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:26:31.161439 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:26:31.161450 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:26:31.161460 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:26:31.161471 | orchestrator | 2025-06-19 10:26:31.161482 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-06-19 10:26:31.161492 | orchestrator | Thursday 19 June 2025 10:22:40 +0000 (0:00:01.137) 0:00:19.471 ********* 2025-06-19 10:26:31.161503 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:26:31.161514 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:26:31.161524 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:26:31.161535 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:26:31.161545 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:26:31.161555 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:26:31.161566 | orchestrator | 2025-06-19 10:26:31.161577 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-06-19 10:26:31.161589 | orchestrator | Thursday 19 June 2025 10:22:41 +0000 (0:00:01.374) 0:00:20.845 ********* 2025-06-19 10:26:31.161599 | 
orchestrator | skipping: [testbed-node-3] 2025-06-19 10:26:31.161610 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:26:31.161620 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:26:31.161631 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:26:31.161641 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:26:31.161652 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:26:31.161662 | orchestrator | 2025-06-19 10:26:31.161673 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-06-19 10:26:31.161683 | orchestrator | Thursday 19 June 2025 10:22:42 +0000 (0:00:00.660) 0:00:21.506 ********* 2025-06-19 10:26:31.161694 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2025-06-19 10:26:31.161705 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2025-06-19 10:26:31.161715 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:26:31.161726 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2025-06-19 10:26:31.161737 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2025-06-19 10:26:31.161747 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:26:31.161757 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2025-06-19 10:26:31.161768 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2025-06-19 10:26:31.161778 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:26:31.161789 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2025-06-19 10:26:31.161800 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2025-06-19 10:26:31.161810 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:26:31.161821 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2025-06-19 10:26:31.161832 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2025-06-19 10:26:31.161849 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:26:31.161860 | orchestrator 
| skipping: [testbed-node-2] => (item=rancher)  2025-06-19 10:26:31.161871 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2025-06-19 10:26:31.161881 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:26:31.161892 | orchestrator | 2025-06-19 10:26:31.161902 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-06-19 10:26:31.161920 | orchestrator | Thursday 19 June 2025 10:22:43 +0000 (0:00:00.882) 0:00:22.389 ********* 2025-06-19 10:26:31.161931 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:26:31.161941 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:26:31.161952 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:26:31.161962 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:26:31.161972 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:26:31.161983 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:26:31.161993 | orchestrator | 2025-06-19 10:26:31.162004 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-06-19 10:26:31.162014 | orchestrator | 2025-06-19 10:26:31.162087 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-06-19 10:26:31.162099 | orchestrator | Thursday 19 June 2025 10:22:44 +0000 (0:00:01.796) 0:00:24.185 ********* 2025-06-19 10:26:31.162110 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:26:31.162121 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:26:31.162131 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:26:31.162142 | orchestrator | 2025-06-19 10:26:31.162153 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-06-19 10:26:31.162163 | orchestrator | Thursday 19 June 2025 10:22:46 +0000 (0:00:01.855) 0:00:26.041 ********* 2025-06-19 10:26:31.162174 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:26:31.162185 | orchestrator | ok: 
[testbed-node-2] 2025-06-19 10:26:31.162195 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:26:31.162206 | orchestrator | 2025-06-19 10:26:31.162217 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-06-19 10:26:31.162227 | orchestrator | Thursday 19 June 2025 10:22:47 +0000 (0:00:01.147) 0:00:27.188 ********* 2025-06-19 10:26:31.162238 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:26:31.162249 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:26:31.162260 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:26:31.162270 | orchestrator | 2025-06-19 10:26:31.162281 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-06-19 10:26:31.162292 | orchestrator | Thursday 19 June 2025 10:22:49 +0000 (0:00:01.176) 0:00:28.365 ********* 2025-06-19 10:26:31.162302 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:26:31.162313 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:26:31.162324 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:26:31.162390 | orchestrator | 2025-06-19 10:26:31.162402 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-06-19 10:26:31.162413 | orchestrator | Thursday 19 June 2025 10:22:50 +0000 (0:00:00.907) 0:00:29.273 ********* 2025-06-19 10:26:31.162424 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:26:31.162434 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:26:31.162445 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:26:31.162455 | orchestrator | 2025-06-19 10:26:31.162466 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-06-19 10:26:31.162477 | orchestrator | Thursday 19 June 2025 10:22:50 +0000 (0:00:00.259) 0:00:29.532 ********* 2025-06-19 10:26:31.162488 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 
10:26:31.162499 | orchestrator | 2025-06-19 10:26:31.162510 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-06-19 10:26:31.162520 | orchestrator | Thursday 19 June 2025 10:22:50 +0000 (0:00:00.450) 0:00:29.983 ********* 2025-06-19 10:26:31.162531 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:26:31.162542 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:26:31.162561 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:26:31.162572 | orchestrator | 2025-06-19 10:26:31.162582 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-06-19 10:26:31.162593 | orchestrator | Thursday 19 June 2025 10:22:53 +0000 (0:00:02.788) 0:00:32.771 ********* 2025-06-19 10:26:31.162604 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:26:31.162614 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:26:31.162625 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:26:31.162635 | orchestrator | 2025-06-19 10:26:31.162646 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-06-19 10:26:31.162657 | orchestrator | Thursday 19 June 2025 10:22:54 +0000 (0:00:00.633) 0:00:33.405 ********* 2025-06-19 10:26:31.162668 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:26:31.162678 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:26:31.162689 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:26:31.162699 | orchestrator | 2025-06-19 10:26:31.162710 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-06-19 10:26:31.162720 | orchestrator | Thursday 19 June 2025 10:22:55 +0000 (0:00:01.150) 0:00:34.556 ********* 2025-06-19 10:26:31.162730 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:26:31.162740 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:26:31.162749 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:26:31.162758 | 
orchestrator | 2025-06-19 10:26:31.162768 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-06-19 10:26:31.162778 | orchestrator | Thursday 19 June 2025 10:22:57 +0000 (0:00:02.164) 0:00:36.720 ********* 2025-06-19 10:26:31.162787 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:26:31.162796 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:26:31.162806 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:26:31.162815 | orchestrator | 2025-06-19 10:26:31.162825 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2025-06-19 10:26:31.162834 | orchestrator | Thursday 19 June 2025 10:22:58 +0000 (0:00:00.702) 0:00:37.422 ********* 2025-06-19 10:26:31.162844 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:26:31.162854 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:26:31.162863 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:26:31.162872 | orchestrator | 2025-06-19 10:26:31.162882 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-06-19 10:26:31.162891 | orchestrator | Thursday 19 June 2025 10:22:58 +0000 (0:00:00.487) 0:00:37.909 ********* 2025-06-19 10:26:31.162901 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:26:31.162910 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:26:31.162919 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:26:31.162929 | orchestrator | 2025-06-19 10:26:31.162938 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-06-19 10:26:31.162948 | orchestrator | Thursday 19 June 2025 10:23:00 +0000 (0:00:02.278) 0:00:40.188 ********* 2025-06-19 10:26:31.162965 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 
2025-06-19 10:26:31.162975 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-06-19 10:26:31.162990 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-06-19 10:26:31.163000 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-06-19 10:26:31.163009 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-06-19 10:26:31.163019 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-06-19 10:26:31.163034 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-06-19 10:26:31.163044 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-06-19 10:26:31.163054 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-06-19 10:26:31.163063 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-06-19 10:26:31.163072 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-06-19 10:26:31.163082 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 
2025-06-19 10:26:31.163091 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:26:31.163101 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:26:31.163110 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:26:31.163120 | orchestrator | 2025-06-19 10:26:31.163129 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-06-19 10:26:31.163139 | orchestrator | Thursday 19 June 2025 10:23:45 +0000 (0:00:44.753) 0:01:24.942 ********* 2025-06-19 10:26:31.163148 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:26:31.163158 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:26:31.163167 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:26:31.163177 | orchestrator | 2025-06-19 10:26:31.163186 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-06-19 10:26:31.163196 | orchestrator | Thursday 19 June 2025 10:23:46 +0000 (0:00:00.495) 0:01:25.438 ********* 2025-06-19 10:26:31.163205 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:26:31.163215 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:26:31.163224 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:26:31.163233 | orchestrator | 2025-06-19 10:26:31.163243 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2025-06-19 10:26:31.163252 | orchestrator | Thursday 19 June 2025 10:23:47 +0000 (0:00:00.943) 0:01:26.381 ********* 2025-06-19 10:26:31.163262 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:26:31.163271 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:26:31.163280 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:26:31.163290 | orchestrator | 2025-06-19 10:26:31.163299 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2025-06-19 10:26:31.163309 | orchestrator | Thursday 19 June 2025 10:23:48 +0000 (0:00:01.170) 0:01:27.552 ********* 2025-06-19 10:26:31.163318 
| orchestrator | changed: [testbed-node-1] 2025-06-19 10:26:31.163351 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:26:31.163368 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:26:31.163385 | orchestrator | 2025-06-19 10:26:31.163403 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-06-19 10:26:31.163420 | orchestrator | Thursday 19 June 2025 10:24:03 +0000 (0:00:14.930) 0:01:42.483 ********* 2025-06-19 10:26:31.163435 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:26:31.163445 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:26:31.163454 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:26:31.163463 | orchestrator | 2025-06-19 10:26:31.163473 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-06-19 10:26:31.163483 | orchestrator | Thursday 19 June 2025 10:24:04 +0000 (0:00:00.813) 0:01:43.296 ********* 2025-06-19 10:26:31.163492 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:26:31.163501 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:26:31.163511 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:26:31.163520 | orchestrator | 2025-06-19 10:26:31.163529 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2025-06-19 10:26:31.163541 | orchestrator | Thursday 19 June 2025 10:24:04 +0000 (0:00:00.640) 0:01:43.937 ********* 2025-06-19 10:26:31.163567 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:26:31.163582 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:26:31.163592 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:26:31.163601 | orchestrator | 2025-06-19 10:26:31.163610 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2025-06-19 10:26:31.163620 | orchestrator | Thursday 19 June 2025 10:24:05 +0000 (0:00:00.620) 0:01:44.558 ********* 2025-06-19 10:26:31.163629 | orchestrator | ok: [testbed-node-1] 
2025-06-19 10:26:31.163638 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:26:31.163648 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:26:31.163657 | orchestrator | 2025-06-19 10:26:31.163666 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2025-06-19 10:26:31.163676 | orchestrator | Thursday 19 June 2025 10:24:05 +0000 (0:00:00.535) 0:01:45.094 ********* 2025-06-19 10:26:31.163692 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:26:31.163702 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:26:31.163711 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:26:31.163721 | orchestrator | 2025-06-19 10:26:31.163730 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2025-06-19 10:26:31.163740 | orchestrator | Thursday 19 June 2025 10:24:06 +0000 (0:00:00.533) 0:01:45.627 ********* 2025-06-19 10:26:31.163749 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:26:31.163759 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:26:31.163768 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:26:31.163778 | orchestrator | 2025-06-19 10:26:31.163788 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2025-06-19 10:26:31.163797 | orchestrator | Thursday 19 June 2025 10:24:07 +0000 (0:00:00.587) 0:01:46.215 ********* 2025-06-19 10:26:31.163807 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:26:31.163816 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:26:31.163825 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:26:31.163835 | orchestrator | 2025-06-19 10:26:31.163844 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2025-06-19 10:26:31.163854 | orchestrator | Thursday 19 June 2025 10:24:07 +0000 (0:00:00.607) 0:01:46.822 ********* 2025-06-19 10:26:31.163863 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:26:31.163872 | 
orchestrator | changed: [testbed-node-1] 2025-06-19 10:26:31.163882 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:26:31.163891 | orchestrator | 2025-06-19 10:26:31.163900 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2025-06-19 10:26:31.163910 | orchestrator | Thursday 19 June 2025 10:24:08 +0000 (0:00:00.941) 0:01:47.764 ********* 2025-06-19 10:26:31.163919 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:26:31.163929 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:26:31.163938 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:26:31.163947 | orchestrator | 2025-06-19 10:26:31.163957 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2025-06-19 10:26:31.163966 | orchestrator | Thursday 19 June 2025 10:24:09 +0000 (0:00:01.221) 0:01:48.986 ********* 2025-06-19 10:26:31.163976 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:26:31.163985 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:26:31.163994 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:26:31.164003 | orchestrator | 2025-06-19 10:26:31.164013 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2025-06-19 10:26:31.164022 | orchestrator | Thursday 19 June 2025 10:24:10 +0000 (0:00:00.358) 0:01:49.344 ********* 2025-06-19 10:26:31.164032 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:26:31.164041 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:26:31.164050 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:26:31.164060 | orchestrator | 2025-06-19 10:26:31.164069 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2025-06-19 10:26:31.164079 | orchestrator | Thursday 19 June 2025 10:24:10 +0000 (0:00:00.372) 0:01:49.717 ********* 2025-06-19 10:26:31.164088 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:26:31.164103 | orchestrator | 
ok: [testbed-node-1] 2025-06-19 10:26:31.164113 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:26:31.164122 | orchestrator | 2025-06-19 10:26:31.164132 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2025-06-19 10:26:31.164141 | orchestrator | Thursday 19 June 2025 10:24:11 +0000 (0:00:00.848) 0:01:50.566 ********* 2025-06-19 10:26:31.164151 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:26:31.164168 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:26:31.164178 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:26:31.164187 | orchestrator | 2025-06-19 10:26:31.164197 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2025-06-19 10:26:31.164206 | orchestrator | Thursday 19 June 2025 10:24:12 +0000 (0:00:01.100) 0:01:51.666 ********* 2025-06-19 10:26:31.164216 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-06-19 10:26:31.164225 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-06-19 10:26:31.164235 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-06-19 10:26:31.164244 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-06-19 10:26:31.164254 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-06-19 10:26:31.164263 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-06-19 10:26:31.164273 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-06-19 10:26:31.164282 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-06-19 10:26:31.164292 | 
orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-06-19 10:26:31.164301 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2025-06-19 10:26:31.164310 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-06-19 10:26:31.164320 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-06-19 10:26:31.164356 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2025-06-19 10:26:31.164374 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-06-19 10:26:31.164391 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-06-19 10:26:31.164407 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-06-19 10:26:31.164425 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-06-19 10:26:31.164437 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-06-19 10:26:31.164453 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-06-19 10:26:31.164477 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-06-19 10:26:31.164489 | orchestrator | 2025-06-19 10:26:31.164499 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2025-06-19 10:26:31.164508 | orchestrator | 2025-06-19 10:26:31.164517 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2025-06-19 10:26:31.164527 | orchestrator | Thursday 19 June 2025 10:24:15 +0000 (0:00:03.186) 
0:01:54.852 ********* 2025-06-19 10:26:31.164536 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:26:31.164546 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:26:31.164558 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:26:31.164574 | orchestrator | 2025-06-19 10:26:31.164591 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2025-06-19 10:26:31.164622 | orchestrator | Thursday 19 June 2025 10:24:15 +0000 (0:00:00.310) 0:01:55.163 ********* 2025-06-19 10:26:31.164633 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:26:31.164642 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:26:31.164652 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:26:31.164661 | orchestrator | 2025-06-19 10:26:31.164671 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2025-06-19 10:26:31.164680 | orchestrator | Thursday 19 June 2025 10:24:16 +0000 (0:00:00.781) 0:01:55.944 ********* 2025-06-19 10:26:31.164690 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:26:31.164699 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:26:31.164708 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:26:31.164718 | orchestrator | 2025-06-19 10:26:31.164727 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2025-06-19 10:26:31.164736 | orchestrator | Thursday 19 June 2025 10:24:17 +0000 (0:00:00.315) 0:01:56.259 ********* 2025-06-19 10:26:31.164746 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-19 10:26:31.164755 | orchestrator | 2025-06-19 10:26:31.164765 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2025-06-19 10:26:31.164774 | orchestrator | Thursday 19 June 2025 10:24:17 +0000 (0:00:00.466) 0:01:56.726 ********* 2025-06-19 10:26:31.164783 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:26:31.164793 
| orchestrator | skipping: [testbed-node-4] 2025-06-19 10:26:31.164802 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:26:31.164811 | orchestrator | 2025-06-19 10:26:31.164821 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2025-06-19 10:26:31.164830 | orchestrator | Thursday 19 June 2025 10:24:18 +0000 (0:00:00.484) 0:01:57.211 ********* 2025-06-19 10:26:31.164840 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:26:31.164849 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:26:31.164859 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:26:31.164868 | orchestrator | 2025-06-19 10:26:31.164877 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2025-06-19 10:26:31.164887 | orchestrator | Thursday 19 June 2025 10:24:18 +0000 (0:00:00.327) 0:01:57.538 ********* 2025-06-19 10:26:31.164896 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:26:31.164905 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:26:31.164914 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:26:31.164924 | orchestrator | 2025-06-19 10:26:31.164933 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2025-06-19 10:26:31.164943 | orchestrator | Thursday 19 June 2025 10:24:18 +0000 (0:00:00.308) 0:01:57.847 ********* 2025-06-19 10:26:31.164952 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:26:31.164961 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:26:31.164970 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:26:31.164980 | orchestrator | 2025-06-19 10:26:31.164989 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2025-06-19 10:26:31.164999 | orchestrator | Thursday 19 June 2025 10:24:19 +0000 (0:00:01.315) 0:01:59.163 ********* 2025-06-19 10:26:31.165008 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:26:31.165017 | 
orchestrator | changed: [testbed-node-4] 2025-06-19 10:26:31.165027 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:26:31.165036 | orchestrator | 2025-06-19 10:26:31.165046 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-06-19 10:26:31.165059 | orchestrator | 2025-06-19 10:26:31.165075 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-06-19 10:26:31.165090 | orchestrator | Thursday 19 June 2025 10:24:29 +0000 (0:00:09.595) 0:02:08.759 ********* 2025-06-19 10:26:31.165104 | orchestrator | ok: [testbed-manager] 2025-06-19 10:26:31.165118 | orchestrator | 2025-06-19 10:26:31.165133 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-06-19 10:26:31.165148 | orchestrator | Thursday 19 June 2025 10:24:30 +0000 (0:00:00.836) 0:02:09.595 ********* 2025-06-19 10:26:31.165172 | orchestrator | changed: [testbed-manager] 2025-06-19 10:26:31.165187 | orchestrator | 2025-06-19 10:26:31.165202 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-06-19 10:26:31.165217 | orchestrator | Thursday 19 June 2025 10:24:30 +0000 (0:00:00.447) 0:02:10.042 ********* 2025-06-19 10:26:31.165234 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-06-19 10:26:31.165251 | orchestrator | 2025-06-19 10:26:31.165267 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-06-19 10:26:31.165281 | orchestrator | Thursday 19 June 2025 10:24:31 +0000 (0:00:00.550) 0:02:10.593 ********* 2025-06-19 10:26:31.165291 | orchestrator | changed: [testbed-manager] 2025-06-19 10:26:31.165300 | orchestrator | 2025-06-19 10:26:31.165309 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-06-19 10:26:31.165319 | orchestrator | Thursday 19 June 2025 10:24:32 +0000 (0:00:00.886) 
0:02:11.479 ********* 2025-06-19 10:26:31.165350 | orchestrator | changed: [testbed-manager] 2025-06-19 10:26:31.165361 | orchestrator | 2025-06-19 10:26:31.165370 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-06-19 10:26:31.165388 | orchestrator | Thursday 19 June 2025 10:24:32 +0000 (0:00:00.694) 0:02:12.174 ********* 2025-06-19 10:26:31.165398 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-19 10:26:31.165407 | orchestrator | 2025-06-19 10:26:31.165416 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-06-19 10:26:31.165426 | orchestrator | Thursday 19 June 2025 10:24:35 +0000 (0:00:02.776) 0:02:14.950 ********* 2025-06-19 10:26:31.165435 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-19 10:26:31.165444 | orchestrator | 2025-06-19 10:26:31.165460 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-06-19 10:26:31.165469 | orchestrator | Thursday 19 June 2025 10:24:36 +0000 (0:00:01.117) 0:02:16.068 ********* 2025-06-19 10:26:31.165479 | orchestrator | changed: [testbed-manager] 2025-06-19 10:26:31.165488 | orchestrator | 2025-06-19 10:26:31.165497 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-06-19 10:26:31.165507 | orchestrator | Thursday 19 June 2025 10:24:37 +0000 (0:00:00.424) 0:02:16.492 ********* 2025-06-19 10:26:31.165516 | orchestrator | changed: [testbed-manager] 2025-06-19 10:26:31.165525 | orchestrator | 2025-06-19 10:26:31.165535 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2025-06-19 10:26:31.165544 | orchestrator | 2025-06-19 10:26:31.165553 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2025-06-19 10:26:31.165563 | orchestrator | Thursday 19 June 2025 10:24:37 +0000 (0:00:00.468) 0:02:16.961 
********* 2025-06-19 10:26:31.165572 | orchestrator | ok: [testbed-manager] 2025-06-19 10:26:31.165582 | orchestrator | 2025-06-19 10:26:31.165591 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2025-06-19 10:26:31.165601 | orchestrator | Thursday 19 June 2025 10:24:37 +0000 (0:00:00.195) 0:02:17.156 ********* 2025-06-19 10:26:31.165610 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2025-06-19 10:26:31.165619 | orchestrator | 2025-06-19 10:26:31.165629 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2025-06-19 10:26:31.165638 | orchestrator | Thursday 19 June 2025 10:24:38 +0000 (0:00:00.241) 0:02:17.398 ********* 2025-06-19 10:26:31.165647 | orchestrator | ok: [testbed-manager] 2025-06-19 10:26:31.165657 | orchestrator | 2025-06-19 10:26:31.165666 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2025-06-19 10:26:31.165675 | orchestrator | Thursday 19 June 2025 10:24:39 +0000 (0:00:00.830) 0:02:18.228 ********* 2025-06-19 10:26:31.165685 | orchestrator | ok: [testbed-manager] 2025-06-19 10:26:31.165694 | orchestrator | 2025-06-19 10:26:31.165703 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2025-06-19 10:26:31.165713 | orchestrator | Thursday 19 June 2025 10:24:41 +0000 (0:00:02.068) 0:02:20.296 ********* 2025-06-19 10:26:31.165722 | orchestrator | changed: [testbed-manager] 2025-06-19 10:26:31.165741 | orchestrator | 2025-06-19 10:26:31.165751 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2025-06-19 10:26:31.165761 | orchestrator | Thursday 19 June 2025 10:24:42 +0000 (0:00:01.074) 0:02:21.370 ********* 2025-06-19 10:26:31.165770 | orchestrator | ok: [testbed-manager] 2025-06-19 10:26:31.165779 | orchestrator | 2025-06-19 10:26:31.165789 | orchestrator | TASK 
[kubectl : Add repository Debian] ***************************************** 2025-06-19 10:26:31.165798 | orchestrator | Thursday 19 June 2025 10:24:42 +0000 (0:00:00.515) 0:02:21.885 ********* 2025-06-19 10:26:31.165807 | orchestrator | changed: [testbed-manager] 2025-06-19 10:26:31.165817 | orchestrator | 2025-06-19 10:26:31.165826 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2025-06-19 10:26:31.165836 | orchestrator | Thursday 19 June 2025 10:24:50 +0000 (0:00:08.104) 0:02:29.989 ********* 2025-06-19 10:26:31.165845 | orchestrator | changed: [testbed-manager] 2025-06-19 10:26:31.165855 | orchestrator | 2025-06-19 10:26:31.165864 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2025-06-19 10:26:31.165873 | orchestrator | Thursday 19 June 2025 10:25:06 +0000 (0:00:15.794) 0:02:45.784 ********* 2025-06-19 10:26:31.165883 | orchestrator | ok: [testbed-manager] 2025-06-19 10:26:31.165892 | orchestrator | 2025-06-19 10:26:31.165901 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2025-06-19 10:26:31.165911 | orchestrator | 2025-06-19 10:26:31.165920 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2025-06-19 10:26:31.165929 | orchestrator | Thursday 19 June 2025 10:25:07 +0000 (0:00:00.500) 0:02:46.285 ********* 2025-06-19 10:26:31.165939 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:26:31.165948 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:26:31.165957 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:26:31.165967 | orchestrator | 2025-06-19 10:26:31.165976 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2025-06-19 10:26:31.165985 | orchestrator | Thursday 19 June 2025 10:25:07 +0000 (0:00:00.291) 0:02:46.576 ********* 2025-06-19 10:26:31.165994 | orchestrator | skipping: [testbed-node-0] 
2025-06-19 10:26:31.166004 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:26:31.166013 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:26:31.166053 | orchestrator | 2025-06-19 10:26:31.166063 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2025-06-19 10:26:31.166072 | orchestrator | Thursday 19 June 2025 10:25:07 +0000 (0:00:00.517) 0:02:47.093 ********* 2025-06-19 10:26:31.166082 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:26:31.166091 | orchestrator | 2025-06-19 10:26:31.166101 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2025-06-19 10:26:31.166110 | orchestrator | Thursday 19 June 2025 10:25:08 +0000 (0:00:00.584) 0:02:47.678 ********* 2025-06-19 10:26:31.166120 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-06-19 10:26:31.166129 | orchestrator | 2025-06-19 10:26:31.166139 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2025-06-19 10:26:31.166148 | orchestrator | Thursday 19 June 2025 10:25:09 +0000 (0:00:00.898) 0:02:48.576 ********* 2025-06-19 10:26:31.166157 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-19 10:26:31.166167 | orchestrator | 2025-06-19 10:26:31.166176 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2025-06-19 10:26:31.166191 | orchestrator | Thursday 19 June 2025 10:25:10 +0000 (0:00:00.892) 0:02:49.469 ********* 2025-06-19 10:26:31.166200 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:26:31.166210 | orchestrator | 2025-06-19 10:26:31.166219 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2025-06-19 10:26:31.166229 | orchestrator | Thursday 19 June 2025 10:25:10 +0000 (0:00:00.193) 0:02:49.663 ********* 2025-06-19 10:26:31.166238 | orchestrator 
| ok: [testbed-node-0 -> localhost] 2025-06-19 10:26:31.166248 | orchestrator | 2025-06-19 10:26:31.166262 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2025-06-19 10:26:31.166278 | orchestrator | Thursday 19 June 2025 10:25:11 +0000 (0:00:01.024) 0:02:50.687 ********* 2025-06-19 10:26:31.166288 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:26:31.166297 | orchestrator | 2025-06-19 10:26:31.166306 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2025-06-19 10:26:31.166316 | orchestrator | Thursday 19 June 2025 10:25:11 +0000 (0:00:00.204) 0:02:50.892 ********* 2025-06-19 10:26:31.166379 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:26:31.166392 | orchestrator | 2025-06-19 10:26:31.166402 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2025-06-19 10:26:31.166411 | orchestrator | Thursday 19 June 2025 10:25:12 +0000 (0:00:00.591) 0:02:51.483 ********* 2025-06-19 10:26:31.166421 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:26:31.166430 | orchestrator | 2025-06-19 10:26:31.166440 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2025-06-19 10:26:31.166456 | orchestrator | Thursday 19 June 2025 10:25:12 +0000 (0:00:00.209) 0:02:51.693 ********* 2025-06-19 10:26:31.166469 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:26:31.166478 | orchestrator | 2025-06-19 10:26:31.166488 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2025-06-19 10:26:31.166497 | orchestrator | Thursday 19 June 2025 10:25:12 +0000 (0:00:00.218) 0:02:51.911 ********* 2025-06-19 10:26:31.166507 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-06-19 10:26:31.166516 | orchestrator | 2025-06-19 10:26:31.166526 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] 
***************************** 2025-06-19 10:26:31.166535 | orchestrator | Thursday 19 June 2025 10:25:19 +0000 (0:00:06.451) 0:02:58.363 ********* 2025-06-19 10:26:31.166544 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2025-06-19 10:26:31.166554 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 2025-06-19 10:26:31.166564 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2025-06-19 10:26:31.166573 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2025-06-19 10:26:31.166582 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2025-06-19 10:26:31.166592 | orchestrator | 2025-06-19 10:26:31.166601 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2025-06-19 10:26:31.166610 | orchestrator | Thursday 19 June 2025 10:26:02 +0000 (0:00:43.093) 0:03:41.457 ********* 2025-06-19 10:26:31.166620 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-19 10:26:31.166629 | orchestrator | 2025-06-19 10:26:31.166639 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2025-06-19 10:26:31.166648 | orchestrator | Thursday 19 June 2025 10:26:03 +0000 (0:00:01.227) 0:03:42.684 ********* 2025-06-19 10:26:31.166657 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-06-19 10:26:31.166667 | orchestrator | 2025-06-19 10:26:31.166676 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2025-06-19 10:26:31.166685 | orchestrator | Thursday 19 June 2025 10:26:05 +0000 (0:00:01.742) 0:03:44.426 ********* 2025-06-19 10:26:31.166695 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-06-19 10:26:31.166704 | orchestrator | 2025-06-19 10:26:31.166714 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] 
*** 2025-06-19 10:26:31.166723 | orchestrator | Thursday 19 June 2025 10:26:06 +0000 (0:00:01.084) 0:03:45.510 ********* 2025-06-19 10:26:31.166732 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:26:31.166742 | orchestrator | 2025-06-19 10:26:31.166751 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2025-06-19 10:26:31.166761 | orchestrator | Thursday 19 June 2025 10:26:06 +0000 (0:00:00.184) 0:03:45.695 ********* 2025-06-19 10:26:31.166770 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2025-06-19 10:26:31.166780 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2025-06-19 10:26:31.166800 | orchestrator | 2025-06-19 10:26:31.166810 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2025-06-19 10:26:31.166820 | orchestrator | Thursday 19 June 2025 10:26:08 +0000 (0:00:02.065) 0:03:47.760 ********* 2025-06-19 10:26:31.166829 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:26:31.166839 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:26:31.166846 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:26:31.166854 | orchestrator | 2025-06-19 10:26:31.166862 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2025-06-19 10:26:31.166869 | orchestrator | Thursday 19 June 2025 10:26:09 +0000 (0:00:00.597) 0:03:48.358 ********* 2025-06-19 10:26:31.166877 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:26:31.166890 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:26:31.166900 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:26:31.166907 | orchestrator | 2025-06-19 10:26:31.166915 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2025-06-19 10:26:31.166923 | orchestrator | 2025-06-19 10:26:31.166931 | orchestrator | TASK [k9s : 
Gather variables for each operating system] ************************ 2025-06-19 10:26:31.166938 | orchestrator | Thursday 19 June 2025 10:26:10 +0000 (0:00:00.905) 0:03:49.263 ********* 2025-06-19 10:26:31.166946 | orchestrator | ok: [testbed-manager] 2025-06-19 10:26:31.166953 | orchestrator | 2025-06-19 10:26:31.166961 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2025-06-19 10:26:31.166969 | orchestrator | Thursday 19 June 2025 10:26:10 +0000 (0:00:00.122) 0:03:49.386 ********* 2025-06-19 10:26:31.166982 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2025-06-19 10:26:31.166990 | orchestrator | 2025-06-19 10:26:31.166998 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2025-06-19 10:26:31.167005 | orchestrator | Thursday 19 June 2025 10:26:10 +0000 (0:00:00.202) 0:03:49.588 ********* 2025-06-19 10:26:31.167013 | orchestrator | changed: [testbed-manager] 2025-06-19 10:26:31.167021 | orchestrator | 2025-06-19 10:26:31.167028 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2025-06-19 10:26:31.167040 | orchestrator | 2025-06-19 10:26:31.167048 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2025-06-19 10:26:31.167056 | orchestrator | Thursday 19 June 2025 10:26:15 +0000 (0:00:05.429) 0:03:55.018 ********* 2025-06-19 10:26:31.167063 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:26:31.167071 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:26:31.167079 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:26:31.167087 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:26:31.167094 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:26:31.167102 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:26:31.167109 | orchestrator | 2025-06-19 10:26:31.167117 | orchestrator | TASK [Manage labels] 
*********************************************************** 2025-06-19 10:26:31.167124 | orchestrator | Thursday 19 June 2025 10:26:16 +0000 (0:00:00.614) 0:03:55.632 ********* 2025-06-19 10:26:31.167132 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-06-19 10:26:31.167140 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-06-19 10:26:31.167148 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-06-19 10:26:31.167156 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-06-19 10:26:31.167163 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-06-19 10:26:31.167171 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-06-19 10:26:31.167179 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-06-19 10:26:31.167186 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-06-19 10:26:31.167200 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2025-06-19 10:26:31.167207 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2025-06-19 10:26:31.167215 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-06-19 10:26:31.167222 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2025-06-19 10:26:31.167230 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-06-19 10:26:31.167238 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-06-19 10:26:31.167246 | orchestrator | ok: [testbed-node-0 -> 
localhost] => (item=node-role.osism.tech/network-plane=true) 2025-06-19 10:26:31.167253 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-06-19 10:26:31.167261 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-06-19 10:26:31.167269 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-06-19 10:26:31.167276 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-06-19 10:26:31.167284 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-06-19 10:26:31.167291 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-06-19 10:26:31.167299 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-06-19 10:26:31.167307 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-06-19 10:26:31.167315 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-06-19 10:26:31.167322 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-06-19 10:26:31.167349 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-06-19 10:26:31.167357 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-06-19 10:26:31.167364 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-06-19 10:26:31.167372 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-06-19 10:26:31.167380 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-06-19 10:26:31.167387 | orchestrator | 2025-06-19 10:26:31.167395 | 
orchestrator | TASK [Manage annotations] ******************************************************
2025-06-19 10:26:31.167402 | orchestrator | Thursday 19 June 2025 10:26:29 +0000 (0:00:13.180) 0:04:08.813 *********
2025-06-19 10:26:31.167410 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:26:31.167418 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:26:31.167425 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:26:31.167433 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:26:31.167440 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:26:31.167448 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:26:31.167456 | orchestrator |
2025-06-19 10:26:31.167463 | orchestrator | TASK [Manage taints] ***********************************************************
2025-06-19 10:26:31.167476 | orchestrator | Thursday 19 June 2025 10:26:30 +0000 (0:00:00.444) 0:04:09.258 *********
2025-06-19 10:26:31.167484 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:26:31.167492 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:26:31.167499 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:26:31.167507 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:26:31.167514 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:26:31.167522 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:26:31.167529 | orchestrator |
2025-06-19 10:26:31.167541 | orchestrator | PLAY RECAP *********************************************************************
2025-06-19 10:26:31.167549 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-19 10:26:31.167563 | orchestrator | testbed-node-0 : ok=46  changed=21  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0
2025-06-19 10:26:31.167571 | orchestrator | testbed-node-1 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0
2025-06-19 10:26:31.167578 | orchestrator | testbed-node-2 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0
2025-06-19 10:26:31.167586 | orchestrator | testbed-node-3 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2025-06-19 10:26:31.167594 | orchestrator | testbed-node-4 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2025-06-19 10:26:31.167602 | orchestrator | testbed-node-5 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2025-06-19 10:26:31.167609 | orchestrator |
2025-06-19 10:26:31.167617 | orchestrator |
2025-06-19 10:26:31.167625 | orchestrator | TASKS RECAP ********************************************************************
2025-06-19 10:26:31.167633 | orchestrator | Thursday 19 June 2025 10:26:30 +0000 (0:00:00.476) 0:04:09.734 *********
2025-06-19 10:26:31.167640 | orchestrator | ===============================================================================
2025-06-19 10:26:31.167648 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 44.75s
2025-06-19 10:26:31.167656 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 43.09s
2025-06-19 10:26:31.167664 | orchestrator | kubectl : Install required packages ------------------------------------ 15.79s
2025-06-19 10:26:31.167671 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 14.93s
2025-06-19 10:26:31.167679 | orchestrator | Manage labels ---------------------------------------------------------- 13.18s
2025-06-19 10:26:31.167687 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 9.60s
2025-06-19 10:26:31.167695 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 8.10s
2025-06-19 10:26:31.167702 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 6.45s
2025-06-19 10:26:31.167710 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.60s
2025-06-19 10:26:31.167718 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.43s
2025-06-19 10:26:31.167725 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.19s
2025-06-19 10:26:31.167733 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.79s
2025-06-19 10:26:31.167741 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 2.78s
2025-06-19 10:26:31.167749 | orchestrator | k3s_prereq : Enable IPv6 router advertisements -------------------------- 2.37s
2025-06-19 10:26:31.167756 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.28s
2025-06-19 10:26:31.167764 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 2.16s
2025-06-19 10:26:31.167772 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 2.07s
2025-06-19 10:26:31.167780 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.07s
2025-06-19 10:26:31.167787 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.02s
2025-06-19 10:26:31.167795 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 1.86s
2025-06-19 10:26:31.167803 | orchestrator | 2025-06-19 10:26:31 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:26:34.183678 | orchestrator | 2025-06-19 10:26:34 | INFO  | Task f13740f1-b8bd-414f-97bd-4e4383a64c81 is in state STARTED
2025-06-19 10:26:34.184725 | orchestrator | 2025-06-19 10:26:34 | INFO  | Task afb5edd4-be04-4edc-a8c0-d10557f87bd2 is in state STARTED
2025-06-19 10:26:34.185129 | orchestrator | 2025-06-19 10:26:34 | INFO  | Task ad102e5b-8672-4b83-a0d0-50b3c80106db is in state STARTED
2025-06-19 10:26:34.185675 | orchestrator | 2025-06-19 10:26:34 | INFO  | Task a82b4610-ae56-4c14-84bc-49c5340bd866 is in state STARTED
2025-06-19 10:26:34.186302 | orchestrator | 2025-06-19 10:26:34 | INFO  | Task 6160d743-f696-4ba3-b032-efd168df75ca is in state STARTED
2025-06-19 10:26:34.189669 | orchestrator | 2025-06-19 10:26:34 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED
2025-06-19 10:26:34.190498 | orchestrator | 2025-06-19 10:26:34 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:26:37.218208 | orchestrator | 2025-06-19 10:26:37 | INFO  | Task f13740f1-b8bd-414f-97bd-4e4383a64c81 is in state STARTED
2025-06-19 10:26:37.220984 | orchestrator | 2025-06-19 10:26:37 | INFO  | Task afb5edd4-be04-4edc-a8c0-d10557f87bd2 is in state STARTED
2025-06-19 10:26:37.223897 | orchestrator | 2025-06-19 10:26:37 | INFO  | Task ad102e5b-8672-4b83-a0d0-50b3c80106db is in state STARTED
2025-06-19 10:26:37.225406 | orchestrator | 2025-06-19 10:26:37 | INFO  | Task a82b4610-ae56-4c14-84bc-49c5340bd866 is in state STARTED
2025-06-19 10:26:37.232050 | orchestrator | 2025-06-19 10:26:37 | INFO  | Task 6160d743-f696-4ba3-b032-efd168df75ca is in state STARTED
2025-06-19 10:26:37.234213 | orchestrator | 2025-06-19 10:26:37 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED
2025-06-19 10:26:37.234407 | orchestrator | 2025-06-19 10:26:37 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:26:40.275423 | orchestrator | 2025-06-19 10:26:40 | INFO  | Task f13740f1-b8bd-414f-97bd-4e4383a64c81 is in state STARTED
2025-06-19 10:26:40.276719 | orchestrator | 2025-06-19 10:26:40 | INFO  | Task afb5edd4-be04-4edc-a8c0-d10557f87bd2 is in state STARTED
2025-06-19 10:26:40.281323 | orchestrator | 2025-06-19 10:26:40 | INFO  | Task ad102e5b-8672-4b83-a0d0-50b3c80106db is in state SUCCESS
2025-06-19 10:26:40.284161 | orchestrator | 2025-06-19 10:26:40 | INFO  | Task a82b4610-ae56-4c14-84bc-49c5340bd866 is in state STARTED
2025-06-19 10:26:40.286000 | orchestrator | 2025-06-19 10:26:40 | INFO  | Task 6160d743-f696-4ba3-b032-efd168df75ca is in state SUCCESS
2025-06-19 10:26:40.287825 | orchestrator |
2025-06-19 10:26:40.287852 | orchestrator |
2025-06-19 10:26:40.287864 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2025-06-19 10:26:40.287876 | orchestrator |
2025-06-19 10:26:40.287887 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-06-19 10:26:40.287898 | orchestrator | Thursday 19 June 2025 10:26:34 +0000 (0:00:00.129) 0:00:00.129 *********
2025-06-19 10:26:40.287910 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-06-19 10:26:40.287921 | orchestrator |
2025-06-19 10:26:40.287932 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-06-19 10:26:40.287943 | orchestrator | Thursday 19 June 2025 10:26:35 +0000 (0:00:00.809) 0:00:00.939 *********
2025-06-19 10:26:40.287954 | orchestrator | changed: [testbed-manager]
2025-06-19 10:26:40.287965 | orchestrator |
2025-06-19 10:26:40.287976 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2025-06-19 10:26:40.287987 | orchestrator | Thursday 19 June 2025 10:26:36 +0000 (0:00:01.231) 0:00:02.170 *********
2025-06-19 10:26:40.287998 | orchestrator | changed: [testbed-manager]
2025-06-19 10:26:40.288009 | orchestrator |
2025-06-19 10:26:40.288019 | orchestrator | PLAY RECAP *********************************************************************
2025-06-19 10:26:40.288031 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-19 10:26:40.288070 | orchestrator |
2025-06-19 10:26:40.288081 | orchestrator |
2025-06-19 10:26:40.288092 | orchestrator | TASKS RECAP ********************************************************************
2025-06-19 10:26:40.288102 | orchestrator | Thursday 19 June 2025 10:26:37 +0000 (0:00:00.469) 0:00:02.640 *********
2025-06-19 10:26:40.288113 | orchestrator | ===============================================================================
2025-06-19 10:26:40.288124 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.23s
2025-06-19 10:26:40.288134 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.81s
2025-06-19 10:26:40.288145 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.47s
2025-06-19 10:26:40.288155 | orchestrator |
2025-06-19 10:26:40.290373 | orchestrator |
2025-06-19 10:26:40.290429 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-19 10:26:40.290435 | orchestrator |
2025-06-19 10:26:40.290440 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-19 10:26:40.290444 | orchestrator | Thursday 19 June 2025 10:25:42 +0000 (0:00:00.232) 0:00:00.232 *********
2025-06-19 10:26:40.290448 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:26:40.290453 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:26:40.290457 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:26:40.290460 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:26:40.290464 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:26:40.290468 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:26:40.290472 | orchestrator |
2025-06-19 10:26:40.290475 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-19 10:26:40.290479 | orchestrator | Thursday 19 June 2025 10:25:43 +0000 (0:00:00.529) 0:00:00.761 *********
2025-06-19 10:26:40.290483 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-19 10:26:40.290487 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-19 10:26:40.290491 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-19 10:26:40.290495 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-19 10:26:40.290499 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-19 10:26:40.290503 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-19 10:26:40.290507 | orchestrator |
2025-06-19 10:26:40.290510 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2025-06-19 10:26:40.290514 | orchestrator |
2025-06-19 10:26:40.290518 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2025-06-19 10:26:40.290527 | orchestrator | Thursday 19 June 2025 10:25:44 +0000 (0:00:00.792) 0:00:01.554 *********
2025-06-19 10:26:40.290531 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-19 10:26:40.290536 | orchestrator |
2025-06-19 10:26:40.290540 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-06-19 10:26:40.290544 | orchestrator | Thursday 19 June 2025 10:25:46 +0000 (0:00:01.802) 0:00:03.356 *********
2025-06-19 10:26:40.290548 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-06-19 10:26:40.290552 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-06-19 10:26:40.290556 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-06-19 10:26:40.290560 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-06-19 10:26:40.290563 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-06-19 10:26:40.290567 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-06-19 10:26:40.290571 | orchestrator |
2025-06-19 10:26:40.290575 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-06-19 10:26:40.290644 | orchestrator | Thursday 19 June 2025 10:25:47 +0000 (0:00:01.294) 0:00:04.651 *********
2025-06-19 10:26:40.290651 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-06-19 10:26:40.290654 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-06-19 10:26:40.290658 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-06-19 10:26:40.290662 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-06-19 10:26:40.290665 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-06-19 10:26:40.290669 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-06-19 10:26:40.290673 | orchestrator |
2025-06-19 10:26:40.290676 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-06-19 10:26:40.290680 | orchestrator | Thursday 19 June 2025 10:25:49 +0000 (0:00:01.652) 0:00:06.304 *********
2025-06-19 10:26:40.290684 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2025-06-19 10:26:40.290688 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:26:40.290692 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2025-06-19 10:26:40.290695 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:26:40.290699 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2025-06-19 10:26:40.290703 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2025-06-19 10:26:40.290707 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:26:40.290710 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2025-06-19 10:26:40.290714 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:26:40.290718 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:26:40.290721 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2025-06-19 10:26:40.290725 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:26:40.290729 | orchestrator |
2025-06-19 10:26:40.290733 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2025-06-19 10:26:40.290736 | orchestrator | Thursday 19 June 2025 10:25:50 +0000 (0:00:01.443) 0:00:07.747 *********
2025-06-19 10:26:40.290740 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:26:40.290744 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:26:40.290747 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:26:40.290751 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:26:40.290755 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:26:40.290758 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:26:40.290762 | orchestrator |
2025-06-19 10:26:40.290766 | orchestrator | TASK [openvswitch : Ensuring config directories exist] *************************
2025-06-19 10:26:40.290769 | orchestrator | Thursday 19 June 2025 10:25:51 +0000 (0:00:00.958) 0:00:08.705 *********
2025-06-19 10:26:40.290784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-19 10:26:40.290790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-19 10:26:40.290799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-19 10:26:40.290803 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-19 10:26:40.290807 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-19 10:26:40.290812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-19 10:26:40.290820 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-19 10:26:40.290824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-19 10:26:40.290833 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-19 10:26:40.290837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-19 10:26:40.290841 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-19 10:26:40.290848 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-19 10:26:40.290852 | orchestrator |
2025-06-19 10:26:40.290856 | orchestrator | TASK [openvswitch : Copying over config.json files for services] ***************
2025-06-19 10:26:40.290860 | orchestrator | Thursday 19 June 2025 10:25:53 +0000 (0:00:01.618) 0:00:10.324 *********
2025-06-19 10:26:40.290864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-19 10:26:40.290876 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-19 10:26:40.290883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-19 10:26:40.290889 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-19 10:26:40.290896 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-19 10:26:40.290914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-19 10:26:40.290920 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-19 10:26:40.290931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-19 10:26:40.290994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-19 10:26:40.291001 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-19 10:26:40.291005 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-19 10:26:40.291014 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-19 10:26:40.291022 | orchestrator |
2025-06-19 10:26:40.291026 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2025-06-19 10:26:40.291030 | orchestrator | Thursday 19 June 2025 10:25:56 +0000 (0:00:03.355) 0:00:13.679 *********
2025-06-19 10:26:40.291034 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:26:40.291038 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:26:40.291042 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:26:40.291045 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:26:40.291049 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:26:40.291422 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:26:40.291434 | orchestrator |
2025-06-19 10:26:40.291438 | orchestrator | TASK [openvswitch : Check openvswitch containers] ******************************
2025-06-19 10:26:40.291442 | orchestrator | Thursday 19 June 2025 10:25:57 +0000 (0:00:01.349) 0:00:15.028 *********
2025-06-19 10:26:40.291447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-19 10:26:40.291454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-19 10:26:40.291458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-19 10:26:40.291462 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-19 10:26:40.291472 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-19 10:26:40.291480 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2025-06-19 10:26:40.291484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2025-06-19 10:26:40.291491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-19 10:26:40.291495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-19 10:26:40.291499 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-19 10:26:40.291509 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-19 10:26:40.291513 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-19 10:26:40.291517 | orchestrator | 2025-06-19 10:26:40.291521 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-19 10:26:40.291525 | orchestrator | Thursday 19 June 2025 10:26:00 +0000 (0:00:02.309) 0:00:17.338 ********* 2025-06-19 10:26:40.291528 | orchestrator | 2025-06-19 10:26:40.291532 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-19 10:26:40.291536 | orchestrator | Thursday 19 June 2025 10:26:00 +0000 (0:00:00.258) 0:00:17.596 ********* 2025-06-19 10:26:40.291540 | orchestrator | 2025-06-19 10:26:40.291543 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-19 10:26:40.291547 | orchestrator | Thursday 19 June 2025 10:26:00 +0000 (0:00:00.295) 0:00:17.892 ********* 2025-06-19 10:26:40.291551 | orchestrator | 2025-06-19 10:26:40.291554 | orchestrator 
| TASK [openvswitch : Flush Handlers] ********************************************
2025-06-19 10:26:40.291558 | orchestrator | Thursday 19 June 2025 10:26:00 +0000 (0:00:00.166) 0:00:18.058 *********
2025-06-19 10:26:40.291562 | orchestrator |
2025-06-19 10:26:40.291566 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-06-19 10:26:40.291570 | orchestrator | Thursday 19 June 2025 10:26:00 +0000 (0:00:00.132) 0:00:18.191 *********
2025-06-19 10:26:40.291573 | orchestrator |
2025-06-19 10:26:40.291577 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2025-06-19 10:26:40.291581 | orchestrator | Thursday 19 June 2025 10:26:01 +0000 (0:00:00.138) 0:00:18.330 *********
2025-06-19 10:26:40.291584 | orchestrator |
2025-06-19 10:26:40.291590 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2025-06-19 10:26:40.291594 | orchestrator | Thursday 19 June 2025 10:26:01 +0000 (0:00:00.259) 0:00:18.589 *********
2025-06-19 10:26:40.291598 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:26:40.291602 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:26:40.291605 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:26:40.291609 | orchestrator | changed: [testbed-node-4]
2025-06-19 10:26:40.291613 | orchestrator | changed: [testbed-node-3]
2025-06-19 10:26:40.291616 | orchestrator | changed: [testbed-node-5]
2025-06-19 10:26:40.291620 | orchestrator |
2025-06-19 10:26:40.291624 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2025-06-19 10:26:40.291628 | orchestrator | Thursday 19 June 2025 10:26:08 +0000 (0:00:07.396) 0:00:25.985 *********
2025-06-19 10:26:40.291632 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:26:40.291636 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:26:40.291640 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:26:40.291649 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:26:40.291653 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:26:40.291657 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:26:40.291661 | orchestrator |
2025-06-19 10:26:40.291665 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-06-19 10:26:40.291669 | orchestrator | Thursday 19 June 2025 10:26:10 +0000 (0:00:01.669) 0:00:27.655 *********
2025-06-19 10:26:40.291673 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:26:40.291678 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:26:40.291682 | orchestrator | changed: [testbed-node-3]
2025-06-19 10:26:40.291686 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:26:40.291690 | orchestrator | changed: [testbed-node-4]
2025-06-19 10:26:40.291694 | orchestrator | changed: [testbed-node-5]
2025-06-19 10:26:40.291698 | orchestrator |
2025-06-19 10:26:40.291702 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2025-06-19 10:26:40.291706 | orchestrator | Thursday 19 June 2025 10:26:14 +0000 (0:00:04.481) 0:00:32.137 *********
2025-06-19 10:26:40.291711 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2025-06-19 10:26:40.291715 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2025-06-19 10:26:40.291719 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2025-06-19 10:26:40.291723 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2025-06-19 10:26:40.291727 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2025-06-19 10:26:40.291734 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2025-06-19 10:26:40.291739 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2025-06-19 10:26:40.291743 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2025-06-19 10:26:40.291747 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2025-06-19 10:26:40.291751 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2025-06-19 10:26:40.291755 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2025-06-19 10:26:40.291759 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2025-06-19 10:26:40.291763 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-06-19 10:26:40.291767 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-06-19 10:26:40.291771 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-06-19 10:26:40.291775 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-06-19 10:26:40.291780 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-06-19 10:26:40.291784 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-06-19 10:26:40.291788 | orchestrator |
2025-06-19 10:26:40.291792 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2025-06-19 10:26:40.291796 | orchestrator | Thursday 19 June 2025 10:26:22 +0000 (0:00:07.994) 0:00:40.131 *********
2025-06-19 10:26:40.291801 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2025-06-19 10:26:40.291809 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:26:40.291813 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2025-06-19 10:26:40.291817 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:26:40.291821 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2025-06-19 10:26:40.291826 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:26:40.291830 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2025-06-19 10:26:40.291834 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2025-06-19 10:26:40.291838 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2025-06-19 10:26:40.291842 | orchestrator |
2025-06-19 10:26:40.291846 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2025-06-19 10:26:40.291852 | orchestrator | Thursday 19 June 2025 10:26:25 +0000 (0:00:03.015) 0:00:43.147 *********
2025-06-19 10:26:40.291857 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2025-06-19 10:26:40.291861 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:26:40.291865 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2025-06-19 10:26:40.291869 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:26:40.291874 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2025-06-19 10:26:40.291878 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:26:40.291882 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2025-06-19 10:26:40.291886 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2025-06-19 10:26:40.291890 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2025-06-19 10:26:40.291894 | orchestrator |
2025-06-19 10:26:40.291898 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-06-19 10:26:40.291902 | orchestrator | Thursday 19 June 2025 10:26:29 +0000 (0:00:03.748) 0:00:46.896 *********
2025-06-19 10:26:40.291906 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:26:40.291910 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:26:40.291914 | orchestrator | changed: [testbed-node-3]
2025-06-19 10:26:40.291919 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:26:40.291923 | orchestrator | changed: [testbed-node-4]
2025-06-19 10:26:40.291927 | orchestrator | changed: [testbed-node-5]
2025-06-19 10:26:40.291931 | orchestrator |
2025-06-19 10:26:40.291935 | orchestrator | PLAY RECAP *********************************************************************
2025-06-19 10:26:40.291939 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-06-19 10:26:40.291944 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-06-19 10:26:40.291948 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-06-19 10:26:40.291952 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-19 10:26:40.291957 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-19 10:26:40.291963 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-19 10:26:40.291968 | orchestrator |
2025-06-19 10:26:40.291972 | orchestrator |
2025-06-19 10:26:40.291976 | orchestrator | TASKS RECAP ********************************************************************
2025-06-19 10:26:40.291981 | orchestrator | Thursday 19 June 2025 10:26:39 +0000 (0:00:09.846) 0:00:56.742 *********
2025-06-19 10:26:40.291985 | orchestrator | ===============================================================================
2025-06-19 10:26:40.291989 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 14.33s
2025-06-19 10:26:40.291993 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.99s
2025-06-19 10:26:40.292000 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 7.40s
2025-06-19 10:26:40.292004 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.75s
2025-06-19 10:26:40.292007 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.36s
2025-06-19 10:26:40.292011 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.02s
2025-06-19 10:26:40.292015 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.31s
2025-06-19 10:26:40.292018 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.80s
2025-06-19 10:26:40.292022 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.67s
2025-06-19 10:26:40.292026 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.65s
2025-06-19 10:26:40.292029 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.62s
2025-06-19 10:26:40.292033 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.44s
2025-06-19 10:26:40.292037 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.35s
2025-06-19 10:26:40.292041 | orchestrator | module-load : Load modules ---------------------------------------------- 1.29s
2025-06-19 10:26:40.292044 | orchestrator | openvswitch : Flush Handlers
-------------------------------------------- 1.25s
2025-06-19 10:26:40.292048 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.96s
2025-06-19 10:26:40.292052 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.79s
2025-06-19 10:26:40.292055 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.53s
2025-06-19 10:26:40.292059 | orchestrator | 2025-06-19 10:26:40 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED
2025-06-19 10:26:40.292063 | orchestrator | 2025-06-19 10:26:40 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:26:43.336358 | orchestrator | 2025-06-19 10:26:43 | INFO  | Task f13740f1-b8bd-414f-97bd-4e4383a64c81 is in state SUCCESS
2025-06-19 10:26:43.339384 | orchestrator | 2025-06-19 10:26:43 | INFO  | Task bd5b7f67-5dbc-49be-8319-9d5c099b40d3 is in state STARTED
2025-06-19 10:26:43.340127 | orchestrator | 2025-06-19 10:26:43 | INFO  | Task afb5edd4-be04-4edc-a8c0-d10557f87bd2 is in state STARTED
2025-06-19 10:26:43.341310 | orchestrator | 2025-06-19 10:26:43 | INFO  | Task a82b4610-ae56-4c14-84bc-49c5340bd866 is in state STARTED
2025-06-19 10:26:43.341905 | orchestrator | 2025-06-19 10:26:43 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED
2025-06-19 10:26:43.342212 | orchestrator | 2025-06-19 10:26:43 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:27:50.365734 | orchestrator | 2025-06-19 10:27:50 | INFO  | Task bd5b7f67-5dbc-49be-8319-9d5c099b40d3 is in state STARTED
2025-06-19 10:27:50.365955 | orchestrator | 2025-06-19 10:27:50 | INFO  | Task afb5edd4-be04-4edc-a8c0-d10557f87bd2 is in state STARTED
2025-06-19 10:27:50.367599 | orchestrator | 2025-06-19 10:27:50 | INFO  | Task a82b4610-ae56-4c14-84bc-49c5340bd866 is in state STARTED
2025-06-19 10:27:50.367643 | orchestrator | 2025-06-19 10:27:50 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED
2025-06-19 10:27:50.367656 | orchestrator | 2025-06-19 10:27:50 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:27:53.417157 | orchestrator | 2025-06-19 10:27:53 | INFO  |
Task bd5b7f67-5dbc-49be-8319-9d5c099b40d3 is in state STARTED 2025-06-19 10:27:53.422593 | orchestrator | 2025-06-19 10:27:53 | INFO  | Task afb5edd4-be04-4edc-a8c0-d10557f87bd2 is in state STARTED 2025-06-19 10:27:53.424076 | orchestrator | 2025-06-19 10:27:53 | INFO  | Task a82b4610-ae56-4c14-84bc-49c5340bd866 is in state STARTED 2025-06-19 10:27:53.426364 | orchestrator | 2025-06-19 10:27:53 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED 2025-06-19 10:27:53.427333 | orchestrator | 2025-06-19 10:27:53 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:27:56.458205 | orchestrator | 2025-06-19 10:27:56 | INFO  | Task bd5b7f67-5dbc-49be-8319-9d5c099b40d3 is in state STARTED 2025-06-19 10:27:56.459340 | orchestrator | 2025-06-19 10:27:56 | INFO  | Task afb5edd4-be04-4edc-a8c0-d10557f87bd2 is in state STARTED 2025-06-19 10:27:56.461709 | orchestrator | 2025-06-19 10:27:56 | INFO  | Task a82b4610-ae56-4c14-84bc-49c5340bd866 is in state STARTED 2025-06-19 10:27:56.462284 | orchestrator | 2025-06-19 10:27:56 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED 2025-06-19 10:27:56.462498 | orchestrator | 2025-06-19 10:27:56 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:27:59.504542 | orchestrator | 2025-06-19 10:27:59 | INFO  | Task bd5b7f67-5dbc-49be-8319-9d5c099b40d3 is in state STARTED 2025-06-19 10:27:59.505713 | orchestrator | 2025-06-19 10:27:59 | INFO  | Task afb5edd4-be04-4edc-a8c0-d10557f87bd2 is in state STARTED 2025-06-19 10:27:59.507060 | orchestrator | 2025-06-19 10:27:59 | INFO  | Task a82b4610-ae56-4c14-84bc-49c5340bd866 is in state STARTED 2025-06-19 10:27:59.508563 | orchestrator | 2025-06-19 10:27:59 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED 2025-06-19 10:27:59.509015 | orchestrator | 2025-06-19 10:27:59 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:28:02.566300 | orchestrator | 2025-06-19 10:28:02 | INFO  | Task 
bd5b7f67-5dbc-49be-8319-9d5c099b40d3 is in state STARTED 2025-06-19 10:28:02.566395 | orchestrator | 2025-06-19 10:28:02 | INFO  | Task afb5edd4-be04-4edc-a8c0-d10557f87bd2 is in state STARTED 2025-06-19 10:28:02.567545 | orchestrator | 2025-06-19 10:28:02 | INFO  | Task a82b4610-ae56-4c14-84bc-49c5340bd866 is in state STARTED 2025-06-19 10:28:02.569223 | orchestrator | 2025-06-19 10:28:02 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED 2025-06-19 10:28:02.569266 | orchestrator | 2025-06-19 10:28:02 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:28:05.600920 | orchestrator | 2025-06-19 10:28:05 | INFO  | Task bd5b7f67-5dbc-49be-8319-9d5c099b40d3 is in state STARTED 2025-06-19 10:28:05.601147 | orchestrator | 2025-06-19 10:28:05 | INFO  | Task afb5edd4-be04-4edc-a8c0-d10557f87bd2 is in state STARTED 2025-06-19 10:28:05.602076 | orchestrator | 2025-06-19 10:28:05 | INFO  | Task a82b4610-ae56-4c14-84bc-49c5340bd866 is in state STARTED 2025-06-19 10:28:05.602929 | orchestrator | 2025-06-19 10:28:05 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED 2025-06-19 10:28:05.602960 | orchestrator | 2025-06-19 10:28:05 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:28:08.639508 | orchestrator | 2025-06-19 10:28:08 | INFO  | Task bd5b7f67-5dbc-49be-8319-9d5c099b40d3 is in state STARTED 2025-06-19 10:28:08.639633 | orchestrator | 2025-06-19 10:28:08 | INFO  | Task afb5edd4-be04-4edc-a8c0-d10557f87bd2 is in state STARTED 2025-06-19 10:28:08.643122 | orchestrator | 2025-06-19 10:28:08 | INFO  | Task a82b4610-ae56-4c14-84bc-49c5340bd866 is in state STARTED 2025-06-19 10:28:08.644052 | orchestrator | 2025-06-19 10:28:08 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED 2025-06-19 10:28:08.644098 | orchestrator | 2025-06-19 10:28:08 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:28:11.680265 | orchestrator | 2025-06-19 10:28:11 | INFO  | Task 
bd5b7f67-5dbc-49be-8319-9d5c099b40d3 is in state STARTED 2025-06-19 10:28:11.682376 | orchestrator | 2025-06-19 10:28:11 | INFO  | Task afb5edd4-be04-4edc-a8c0-d10557f87bd2 is in state STARTED 2025-06-19 10:28:11.683895 | orchestrator | 2025-06-19 10:28:11 | INFO  | Task a82b4610-ae56-4c14-84bc-49c5340bd866 is in state STARTED 2025-06-19 10:28:11.685363 | orchestrator | 2025-06-19 10:28:11 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED 2025-06-19 10:28:11.685390 | orchestrator | 2025-06-19 10:28:11 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:28:14.722494 | orchestrator | 2025-06-19 10:28:14 | INFO  | Task bd5b7f67-5dbc-49be-8319-9d5c099b40d3 is in state STARTED 2025-06-19 10:28:14.722954 | orchestrator | 2025-06-19 10:28:14 | INFO  | Task afb5edd4-be04-4edc-a8c0-d10557f87bd2 is in state STARTED 2025-06-19 10:28:14.723642 | orchestrator | 2025-06-19 10:28:14 | INFO  | Task a82b4610-ae56-4c14-84bc-49c5340bd866 is in state STARTED 2025-06-19 10:28:14.724468 | orchestrator | 2025-06-19 10:28:14 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED 2025-06-19 10:28:14.724916 | orchestrator | 2025-06-19 10:28:14 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:28:17.751805 | orchestrator | 2025-06-19 10:28:17 | INFO  | Task bd5b7f67-5dbc-49be-8319-9d5c099b40d3 is in state STARTED 2025-06-19 10:28:17.752041 | orchestrator | 2025-06-19 10:28:17 | INFO  | Task afb5edd4-be04-4edc-a8c0-d10557f87bd2 is in state STARTED 2025-06-19 10:28:17.752480 | orchestrator | 2025-06-19 10:28:17 | INFO  | Task a82b4610-ae56-4c14-84bc-49c5340bd866 is in state STARTED 2025-06-19 10:28:17.753082 | orchestrator | 2025-06-19 10:28:17 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED 2025-06-19 10:28:17.753103 | orchestrator | 2025-06-19 10:28:17 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:28:20.797188 | orchestrator | 2025-06-19 10:28:20 | INFO  | Task 
bd5b7f67-5dbc-49be-8319-9d5c099b40d3 is in state STARTED
2025-06-19 10:28:20.799553 | orchestrator | 2025-06-19 10:28:20 | INFO  | Task afb5edd4-be04-4edc-a8c0-d10557f87bd2 is in state STARTED
2025-06-19 10:28:20.801269 | orchestrator | 2025-06-19 10:28:20 | INFO  | Task a82b4610-ae56-4c14-84bc-49c5340bd866 is in state SUCCESS
2025-06-19 10:28:20.803335 | orchestrator |
2025-06-19 10:28:20.803386 | orchestrator |
2025-06-19 10:28:20.803399 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2025-06-19 10:28:20.803411 | orchestrator |
2025-06-19 10:28:20.803423 | orchestrator | TASK [Get home directory of operator user] *************************************
2025-06-19 10:28:20.803434 | orchestrator | Thursday 19 June 2025 10:26:34 +0000 (0:00:00.128) 0:00:00.128 *********
2025-06-19 10:28:20.803446 | orchestrator | ok: [testbed-manager]
2025-06-19 10:28:20.803458 | orchestrator |
2025-06-19 10:28:20.803469 | orchestrator | TASK [Create .kube directory] **************************************************
2025-06-19 10:28:20.803481 | orchestrator | Thursday 19 June 2025 10:26:35 +0000 (0:00:00.596) 0:00:00.725 *********
2025-06-19 10:28:20.803492 | orchestrator | ok: [testbed-manager]
2025-06-19 10:28:20.803502 | orchestrator |
2025-06-19 10:28:20.803515 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-06-19 10:28:20.803533 | orchestrator | Thursday 19 June 2025 10:26:35 +0000 (0:00:00.662) 0:00:01.358 *********
2025-06-19 10:28:20.803552 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-06-19 10:28:20.803570 | orchestrator |
2025-06-19 10:28:20.803588 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-06-19 10:28:20.803608 | orchestrator | Thursday 19 June 2025 10:26:36 +0000 (0:00:00.662) 0:00:02.021 *********
2025-06-19 10:28:20.803629 | orchestrator | changed: [testbed-manager]
2025-06-19 10:28:20.803650 | orchestrator |
2025-06-19 10:28:20.803669 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2025-06-19 10:28:20.803680 | orchestrator | Thursday 19 June 2025 10:26:37 +0000 (0:00:01.259) 0:00:03.281 *********
2025-06-19 10:28:20.803691 | orchestrator | changed: [testbed-manager]
2025-06-19 10:28:20.803702 | orchestrator |
2025-06-19 10:28:20.803713 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2025-06-19 10:28:20.803724 | orchestrator | Thursday 19 June 2025 10:26:38 +0000 (0:00:00.738) 0:00:04.019 *********
2025-06-19 10:28:20.803767 | orchestrator | changed: [testbed-manager -> localhost]
2025-06-19 10:28:20.803778 | orchestrator |
2025-06-19 10:28:20.803790 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2025-06-19 10:28:20.803800 | orchestrator | Thursday 19 June 2025 10:26:40 +0000 (0:00:01.791) 0:00:05.810 *********
2025-06-19 10:28:20.803811 | orchestrator | changed: [testbed-manager -> localhost]
2025-06-19 10:28:20.803822 | orchestrator |
2025-06-19 10:28:20.803833 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2025-06-19 10:28:20.803844 | orchestrator | Thursday 19 June 2025 10:26:40 +0000 (0:00:00.877) 0:00:06.688 *********
2025-06-19 10:28:20.803855 | orchestrator | ok: [testbed-manager]
2025-06-19 10:28:20.803865 | orchestrator |
2025-06-19 10:28:20.803887 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2025-06-19 10:28:20.803898 | orchestrator | Thursday 19 June 2025 10:26:41 +0000 (0:00:00.411) 0:00:07.099 *********
2025-06-19 10:28:20.803927 | orchestrator | ok: [testbed-manager]
2025-06-19 10:28:20.803939 | orchestrator |
2025-06-19 10:28:20.803952 | orchestrator | PLAY RECAP *********************************************************************
2025-06-19 10:28:20.803964 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-19 10:28:20.803978 | orchestrator |
2025-06-19 10:28:20.803991 | orchestrator |
2025-06-19 10:28:20.804003 | orchestrator | TASKS RECAP ********************************************************************
2025-06-19 10:28:20.804015 | orchestrator | Thursday 19 June 2025 10:26:41 +0000 (0:00:00.340) 0:00:07.440 *********
2025-06-19 10:28:20.804027 | orchestrator | ===============================================================================
2025-06-19 10:28:20.804039 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.79s
2025-06-19 10:28:20.804051 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.26s
2025-06-19 10:28:20.804063 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.88s
2025-06-19 10:28:20.804075 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.74s
2025-06-19 10:28:20.804087 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.66s
2025-06-19 10:28:20.804099 | orchestrator | Create .kube directory -------------------------------------------------- 0.63s
2025-06-19 10:28:20.804111 | orchestrator | Get home directory of operator user ------------------------------------- 0.60s
2025-06-19 10:28:20.804123 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.41s
2025-06-19 10:28:20.804135 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.34s
2025-06-19 10:28:20.804147 | orchestrator |
2025-06-19 10:28:20.804159 | orchestrator |
2025-06-19 10:28:20.804171 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2025-06-19 10:28:20.804183 | orchestrator |
2025-06-19 10:28:20.804195 | orchestrator | TASK [Inform the user
about the following task] ********************************
2025-06-19 10:28:20.804208 | orchestrator | Thursday 19 June 2025 10:25:59 +0000 (0:00:00.180) 0:00:00.180 *********
2025-06-19 10:28:20.804220 | orchestrator | ok: [localhost] => {
2025-06-19 10:28:20.804233 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2025-06-19 10:28:20.804246 | orchestrator | }
2025-06-19 10:28:20.804258 | orchestrator |
2025-06-19 10:28:20.804269 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2025-06-19 10:28:20.804280 | orchestrator | Thursday 19 June 2025 10:25:59 +0000 (0:00:00.038) 0:00:00.218 *********
2025-06-19 10:28:20.804291 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2025-06-19 10:28:20.804303 | orchestrator | ...ignoring
2025-06-19 10:28:20.804314 | orchestrator |
2025-06-19 10:28:20.804325 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2025-06-19 10:28:20.804335 | orchestrator | Thursday 19 June 2025 10:26:02 +0000 (0:00:02.992) 0:00:03.211 *********
2025-06-19 10:28:20.804346 | orchestrator | skipping: [localhost]
2025-06-19 10:28:20.804356 | orchestrator |
2025-06-19 10:28:20.804381 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2025-06-19 10:28:20.804393 | orchestrator | Thursday 19 June 2025 10:26:02 +0000 (0:00:00.209) 0:00:03.420 *********
2025-06-19 10:28:20.804404 | orchestrator | ok: [localhost]
2025-06-19 10:28:20.804414 | orchestrator |
2025-06-19 10:28:20.804425 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-19 10:28:20.804436 | orchestrator |
2025-06-19 10:28:20.804447 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-19 10:28:20.804457 | orchestrator | Thursday 19 June 2025 10:26:03 +0000 (0:00:00.521) 0:00:03.942 *********
2025-06-19 10:28:20.804468 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:28:20.804487 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:28:20.804498 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:28:20.804509 | orchestrator |
2025-06-19 10:28:20.804519 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-19 10:28:20.804530 | orchestrator | Thursday 19 June 2025 10:26:04 +0000 (0:00:00.918) 0:00:04.860 *********
2025-06-19 10:28:20.804540 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2025-06-19 10:28:20.804552 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2025-06-19 10:28:20.804562 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2025-06-19 10:28:20.804573 | orchestrator |
2025-06-19 10:28:20.804591 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2025-06-19 10:28:20.804610 | orchestrator |
2025-06-19 10:28:20.804632 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-06-19 10:28:20.804651 | orchestrator | Thursday 19 June 2025 10:26:04 +0000 (0:00:00.491) 0:00:05.352 *********
2025-06-19 10:28:20.804662 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-19 10:28:20.804673 | orchestrator |
2025-06-19 10:28:20.804684 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-06-19 10:28:20.804694 | orchestrator | Thursday 19 June 2025 10:26:05 +0000 (0:00:00.862) 0:00:06.214 *********
2025-06-19 10:28:20.804705 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:28:20.804715 | orchestrator |
2025-06-19 10:28:20.804749 | orchestrator | TASK [rabbitmq : Get current
RabbitMQ version] ********************************* 2025-06-19 10:28:20.804765 | orchestrator | Thursday 19 June 2025 10:26:06 +0000 (0:00:01.453) 0:00:07.668 ********* 2025-06-19 10:28:20.804776 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:28:20.804787 | orchestrator | 2025-06-19 10:28:20.804797 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2025-06-19 10:28:20.804814 | orchestrator | Thursday 19 June 2025 10:26:07 +0000 (0:00:00.438) 0:00:08.106 ********* 2025-06-19 10:28:20.804824 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:28:20.804835 | orchestrator | 2025-06-19 10:28:20.804846 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-06-19 10:28:20.804857 | orchestrator | Thursday 19 June 2025 10:26:08 +0000 (0:00:00.772) 0:00:08.879 ********* 2025-06-19 10:28:20.804867 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:28:20.804878 | orchestrator | 2025-06-19 10:28:20.804889 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-06-19 10:28:20.804899 | orchestrator | Thursday 19 June 2025 10:26:08 +0000 (0:00:00.366) 0:00:09.246 ********* 2025-06-19 10:28:20.804910 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:28:20.804920 | orchestrator | 2025-06-19 10:28:20.804931 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-06-19 10:28:20.804941 | orchestrator | Thursday 19 June 2025 10:26:09 +0000 (0:00:00.752) 0:00:09.999 ********* 2025-06-19 10:28:20.804952 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:28:20.804963 | orchestrator | 2025-06-19 10:28:20.804973 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-06-19 10:28:20.804984 | orchestrator | Thursday 19 June 2025 
10:26:10 +0000 (0:00:01.311) 0:00:11.311 ********* 2025-06-19 10:28:20.804995 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:28:20.805005 | orchestrator | 2025-06-19 10:28:20.805016 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-06-19 10:28:20.805027 | orchestrator | Thursday 19 June 2025 10:26:11 +0000 (0:00:01.248) 0:00:12.559 ********* 2025-06-19 10:28:20.805037 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:28:20.805048 | orchestrator | 2025-06-19 10:28:20.805059 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-06-19 10:28:20.805070 | orchestrator | Thursday 19 June 2025 10:26:12 +0000 (0:00:00.907) 0:00:13.466 ********* 2025-06-19 10:28:20.805080 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:28:20.805098 | orchestrator | 2025-06-19 10:28:20.805109 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-06-19 10:28:20.805120 | orchestrator | Thursday 19 June 2025 10:26:13 +0000 (0:00:00.482) 0:00:13.949 ********* 2025-06-19 10:28:20.805146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-19 10:28:20.805162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-19 10:28:20.805181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-19 10:28:20.805193 | orchestrator | 2025-06-19 10:28:20.805204 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-06-19 10:28:20.805215 | orchestrator | Thursday 19 June 2025 10:26:14 +0000 (0:00:00.896) 0:00:14.846 ********* 2025-06-19 10:28:20.805227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-19 10:28:20.805254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 
'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-19 10:28:20.805268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-19 
10:28:20.805279 | orchestrator | 2025-06-19 10:28:20.805290 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-06-19 10:28:20.805301 | orchestrator | Thursday 19 June 2025 10:26:16 +0000 (0:00:02.140) 0:00:16.986 ********* 2025-06-19 10:28:20.805312 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-06-19 10:28:20.805323 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-06-19 10:28:20.805334 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-06-19 10:28:20.805344 | orchestrator | 2025-06-19 10:28:20.805355 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-06-19 10:28:20.805366 | orchestrator | Thursday 19 June 2025 10:26:18 +0000 (0:00:02.361) 0:00:19.347 ********* 2025-06-19 10:28:20.805376 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-06-19 10:28:20.805387 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-06-19 10:28:20.805398 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-06-19 10:28:20.805414 | orchestrator | 2025-06-19 10:28:20.805425 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-06-19 10:28:20.805436 | orchestrator | Thursday 19 June 2025 10:26:21 +0000 (0:00:03.392) 0:00:22.740 ********* 2025-06-19 10:28:20.805447 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-06-19 10:28:20.805458 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-06-19 10:28:20.805468 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-06-19 10:28:20.805479 | orchestrator | 2025-06-19 10:28:20.805489 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-06-19 10:28:20.805500 | orchestrator | Thursday 19 June 2025 10:26:24 +0000 (0:00:02.380) 0:00:25.121 ********* 2025-06-19 10:28:20.805510 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-06-19 10:28:20.805521 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-06-19 10:28:20.805532 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-06-19 10:28:20.805542 | orchestrator | 2025-06-19 10:28:20.805553 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-06-19 10:28:20.805563 | orchestrator | Thursday 19 June 2025 10:26:26 +0000 (0:00:02.475) 0:00:27.596 ********* 2025-06-19 10:28:20.805574 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-06-19 10:28:20.805650 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-06-19 10:28:20.805683 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-06-19 10:28:20.805703 | orchestrator | 2025-06-19 10:28:20.805721 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-06-19 10:28:20.805762 | orchestrator | Thursday 19 June 2025 10:26:28 +0000 (0:00:01.819) 0:00:29.416 ********* 2025-06-19 10:28:20.805783 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-06-19 10:28:20.805794 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-06-19 
10:28:20.805805 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-06-19 10:28:20.805815 | orchestrator | 2025-06-19 10:28:20.805826 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-06-19 10:28:20.805837 | orchestrator | Thursday 19 June 2025 10:26:30 +0000 (0:00:01.873) 0:00:31.289 ********* 2025-06-19 10:28:20.805847 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:28:20.805858 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:28:20.805869 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:28:20.805879 | orchestrator | 2025-06-19 10:28:20.805890 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-06-19 10:28:20.805901 | orchestrator | Thursday 19 June 2025 10:26:32 +0000 (0:00:01.586) 0:00:32.876 ********* 2025-06-19 10:28:20.805913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 
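(Aside: the container spec dumped in the entries above declares a healthcheck with `interval: 30`, `retries: 3`, `timeout: 30` and a `CMD-SHELL` probe. That retry behaviour amounts to a simple loop; the sketch below illustrates it in Python. `run_healthcheck` and its `runner` hook are hypothetical names for illustration only, not part of kolla or OSISM.)

```python
import subprocess
import time


def run_healthcheck(cmd, interval=30, retries=3, timeout=30, runner=None):
    """Run `cmd` up to `retries` times, sleeping `interval` seconds
    between attempts; a zero exit status counts as healthy.

    `runner` lets tests inject a fake probe; by default it shells out
    with a per-attempt `timeout`, mirroring the spec's timeout field.
    """
    if runner is None:
        def runner(c):
            return subprocess.run(c, shell=True, timeout=timeout).returncode
    for attempt in range(1, retries + 1):
        if runner(cmd) == 0:
            return True
        if attempt < retries:
            time.sleep(interval)
    return False
```

With the values from the spec this would be `run_healthcheck('healthcheck_rabbitmq', interval=30, retries=3, timeout=30)`; note that real container runtimes apply such a policy continuously after `start_period`, not just once.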
2025-06-19 10:28:20.805939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-19 10:28:20.805953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-19 10:28:20.805965 | orchestrator | 2025-06-19 10:28:20.805976 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-06-19 10:28:20.805986 | orchestrator | Thursday 19 June 2025 10:26:34 +0000 (0:00:02.027) 0:00:34.904 ********* 2025-06-19 10:28:20.805997 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:28:20.806008 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:28:20.806107 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:28:20.806121 | orchestrator | 2025-06-19 10:28:20.806131 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-06-19 10:28:20.806143 | orchestrator | Thursday 19 June 2025 10:26:34 +0000 (0:00:00.848) 0:00:35.752 ********* 2025-06-19 10:28:20.806153 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:28:20.806164 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:28:20.806175 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:28:20.806185 | orchestrator | 2025-06-19 10:28:20.806196 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-06-19 10:28:20.806207 | orchestrator | Thursday 19 June 2025 10:26:42 +0000 (0:00:07.408) 0:00:43.160 ********* 2025-06-19 10:28:20.806218 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:28:20.806228 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:28:20.806239 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:28:20.806250 | orchestrator | 2025-06-19 10:28:20.806261 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-06-19 10:28:20.806271 | orchestrator | 2025-06-19 10:28:20.806282 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 
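Editor's aside: the "Check rabbitmq containers" task above dumps each container definition, including a `healthcheck` block (`interval`, `retries`, `start_period`, `timeout`, and a `CMD-SHELL` test). As a rough sketch of what those fields mean, here is how such a dict could be mapped onto the equivalent `docker run --health-*` flags. The helper function is an illustrative assumption, not the actual kolla-ansible implementation.

```python
# Illustrative only: translate a Kolla-style healthcheck dict (as logged
# above) into docker CLI --health-* flags. Kolla stores the durations as
# bare numbers of seconds, so we append the "s" unit docker expects.

def healthcheck_to_docker_flags(hc: dict) -> list[str]:
    """Map a healthcheck dict onto docker run --health-* flags."""
    flags = [
        "--health-interval", f"{hc['interval']}s",
        "--health-retries", str(hc["retries"]),
        "--health-start-period", f"{hc['start_period']}s",
        "--health-timeout", f"{hc['timeout']}s",
    ]
    test = hc["test"]
    if test and test[0] == "CMD-SHELL":
        # ['CMD-SHELL', 'healthcheck_rabbitmq'] -> run via the shell
        flags += ["--health-cmd", " ".join(test[1:])]
    return flags

hc = {"interval": "30", "retries": "3", "start_period": "5",
      "test": ["CMD-SHELL", "healthcheck_rabbitmq"], "timeout": "30"}
print(healthcheck_to_docker_flags(hc))
```

With the values from the log this yields a 30-second probe interval and three retries before the container is marked unhealthy.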
2025-06-19 10:28:20.806301 | orchestrator | Thursday 19 June 2025 10:26:42 +0000 (0:00:00.337) 0:00:43.498 *********
2025-06-19 10:28:20.806312 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:28:20.806323 | orchestrator |
2025-06-19 10:28:20.806333 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-06-19 10:28:20.806344 | orchestrator | Thursday 19 June 2025 10:26:43 +0000 (0:00:00.690) 0:00:44.188 *********
2025-06-19 10:28:20.806355 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:28:20.806366 | orchestrator |
2025-06-19 10:28:20.806377 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-06-19 10:28:20.806388 | orchestrator | Thursday 19 June 2025 10:26:43 +0000 (0:00:00.235) 0:00:44.423 *********
2025-06-19 10:28:20.806399 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:28:20.806410 | orchestrator |
2025-06-19 10:28:20.806421 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-06-19 10:28:20.806432 | orchestrator | Thursday 19 June 2025 10:26:50 +0000 (0:00:07.115) 0:00:51.539 *********
2025-06-19 10:28:20.806442 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:28:20.806453 | orchestrator |
2025-06-19 10:28:20.806464 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-06-19 10:28:20.806475 | orchestrator |
2025-06-19 10:28:20.806486 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-06-19 10:28:20.806496 | orchestrator | Thursday 19 June 2025 10:27:40 +0000 (0:00:50.119) 0:01:41.658 *********
2025-06-19 10:28:20.806507 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:28:20.806518 | orchestrator |
2025-06-19 10:28:20.806534 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-06-19 10:28:20.806545 | orchestrator | Thursday 19 June 2025 10:27:41 +0000 (0:00:00.610) 0:01:42.269 *********
2025-06-19 10:28:20.806556 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:28:20.806567 | orchestrator |
2025-06-19 10:28:20.806578 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-06-19 10:28:20.806589 | orchestrator | Thursday 19 June 2025 10:27:41 +0000 (0:00:00.208) 0:01:42.477 *********
2025-06-19 10:28:20.806600 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:28:20.806610 | orchestrator |
2025-06-19 10:28:20.806621 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-06-19 10:28:20.806632 | orchestrator | Thursday 19 June 2025 10:27:43 +0000 (0:00:02.161) 0:01:44.639 *********
2025-06-19 10:28:20.806643 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:28:20.806654 | orchestrator |
2025-06-19 10:28:20.806673 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-06-19 10:28:20.806692 | orchestrator |
2025-06-19 10:28:20.806713 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-06-19 10:28:20.806754 | orchestrator | Thursday 19 June 2025 10:27:58 +0000 (0:00:14.972) 0:01:59.612 *********
2025-06-19 10:28:20.806771 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:28:20.806782 | orchestrator |
2025-06-19 10:28:20.806792 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-06-19 10:28:20.806803 | orchestrator | Thursday 19 June 2025 10:27:59 +0000 (0:00:00.585) 0:02:00.197 *********
2025-06-19 10:28:20.806814 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:28:20.806824 | orchestrator |
2025-06-19 10:28:20.806835 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-06-19 10:28:20.806846 | orchestrator | Thursday 19 June 2025 10:27:59 +0000 (0:00:00.234) 0:02:00.432 *********
2025-06-19 10:28:20.806856 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:28:20.806867 | orchestrator |
2025-06-19 10:28:20.806877 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-06-19 10:28:20.806888 | orchestrator | Thursday 19 June 2025 10:28:06 +0000 (0:00:06.587) 0:02:07.020 *********
2025-06-19 10:28:20.806899 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:28:20.806909 | orchestrator |
2025-06-19 10:28:20.806920 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2025-06-19 10:28:20.806931 | orchestrator |
2025-06-19 10:28:20.806949 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2025-06-19 10:28:20.806960 | orchestrator | Thursday 19 June 2025 10:28:16 +0000 (0:00:10.433) 0:02:17.453 *********
2025-06-19 10:28:20.806970 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-19 10:28:20.806981 | orchestrator |
2025-06-19 10:28:20.806992 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2025-06-19 10:28:20.807002 | orchestrator | Thursday 19 June 2025 10:28:17 +0000 (0:00:01.048) 0:02:18.502 *********
2025-06-19 10:28:20.807013 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-06-19 10:28:20.807024 | orchestrator | enable_outward_rabbitmq_True
2025-06-19 10:28:20.807034 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-06-19 10:28:20.807045 | orchestrator | outward_rabbitmq_restart
2025-06-19 10:28:20.807056 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:28:20.807066 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:28:20.807077 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:28:20.807087 | orchestrator |
2025-06-19 10:28:20.807105 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2025-06-19 10:28:20.807116 | orchestrator | skipping: no hosts matched
2025-06-19 10:28:20.807126 | orchestrator |
2025-06-19 10:28:20.807137 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2025-06-19 10:28:20.807148 | orchestrator | skipping: no hosts matched
2025-06-19 10:28:20.807158 | orchestrator |
2025-06-19 10:28:20.807169 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2025-06-19 10:28:20.807180 | orchestrator | skipping: no hosts matched
2025-06-19 10:28:20.807190 | orchestrator |
2025-06-19 10:28:20.807201 | orchestrator | PLAY RECAP *********************************************************************
2025-06-19 10:28:20.807212 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2025-06-19 10:28:20.807223 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-06-19 10:28:20.807235 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-19 10:28:20.807245 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-19 10:28:20.807256 | orchestrator |
2025-06-19 10:28:20.807267 | orchestrator |
2025-06-19 10:28:20.807277 | orchestrator | TASKS RECAP ********************************************************************
2025-06-19 10:28:20.807288 | orchestrator | Thursday 19 June 2025 10:28:20 +0000 (0:00:02.420) 0:02:20.922 *********
2025-06-19 10:28:20.807299 | orchestrator | ===============================================================================
2025-06-19 10:28:20.807309 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 75.53s
2025-06-19 10:28:20.807320 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 15.87s
2025-06-19 10:28:20.807330 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.41s
2025-06-19 10:28:20.807341 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 3.39s
2025-06-19 10:28:20.807352 | orchestrator | Check RabbitMQ service -------------------------------------------------- 2.99s
2025-06-19 10:28:20.807363 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.48s
2025-06-19 10:28:20.807378 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.42s
2025-06-19 10:28:20.807389 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.38s
2025-06-19 10:28:20.807399 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.36s
2025-06-19 10:28:20.807410 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.14s
2025-06-19 10:28:20.807427 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 2.03s
2025-06-19 10:28:20.807438 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.89s
2025-06-19 10:28:20.807449 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.87s
2025-06-19 10:28:20.807459 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.82s
2025-06-19 10:28:20.807470 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.59s
2025-06-19 10:28:20.807480 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.45s
2025-06-19 10:28:20.807491 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.31s
2025-06-19 10:28:20.807502 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.25s 2025-06-19
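Editor's aside: the long run of "Task <uuid> is in state STARTED … Wait 1 second(s) until the next check" lines below is a plain poll-until-done loop. The following sketch is an assumption about that pattern, not the actual osism CLI source; `wait_for_tasks` and `fake_state` are hypothetical names.

```python
# Illustrative sketch of the task-state polling seen in this log: query a
# backend for each task's state, and keep waiting while any task is STARTED.
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, max_checks=1000):
    """Poll get_state(task_id) until no task is in state STARTED anymore."""
    for _ in range(max_checks):
        states = {tid: get_state(tid) for tid in task_ids}
        if all(state != "STARTED" for state in states.values()):
            return states
        time.sleep(interval)  # "Wait 1 second(s) until the next check"
    raise TimeoutError(f"tasks still running after {max_checks} checks")

# Fake backend for demonstration: each task reports STARTED twice, then SUCCESS.
seen = {}
def fake_state(task_id):
    seen[task_id] = seen.get(task_id, 0) + 1
    return "SUCCESS" if seen[task_id] > 2 else "STARTED"

print(wait_for_tasks(fake_state, ["bd5b7f67", "46f364df"], interval=0))
```

Note that, as in the log, tasks can join (0f1b278f appears mid-run) and finish (state SUCCESS) independently; a real poller would refresh its task list on each pass.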
10:28:20.807512 | orchestrator | Include rabbitmq post-deploy.yml ---------------------------------------- 1.05s
2025-06-19 10:28:20.807523 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.92s
2025-06-19 10:28:20.807534 | orchestrator | 2025-06-19 10:28:20 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED
2025-06-19 10:28:20.807545 | orchestrator | 2025-06-19 10:28:20 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:28:23.842996 | orchestrator | 2025-06-19 10:28:23 | INFO  | Task bd5b7f67-5dbc-49be-8319-9d5c099b40d3 is in state STARTED
2025-06-19 10:28:23.844762 | orchestrator | 2025-06-19 10:28:23 | INFO  | Task afb5edd4-be04-4edc-a8c0-d10557f87bd2 is in state STARTED
2025-06-19 10:28:23.847383 | orchestrator | 2025-06-19 10:28:23 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED
2025-06-19 10:28:23.847476 | orchestrator | 2025-06-19 10:28:23 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:28:26.888981 | orchestrator | 2025-06-19 10:28:26 | INFO  | Task bd5b7f67-5dbc-49be-8319-9d5c099b40d3 is in state STARTED
2025-06-19 10:28:26.889996 | orchestrator | 2025-06-19 10:28:26 | INFO  | Task afb5edd4-be04-4edc-a8c0-d10557f87bd2 is in state STARTED
2025-06-19 10:28:26.894745 | orchestrator | 2025-06-19 10:28:26 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED
2025-06-19 10:28:26.894776 | orchestrator | 2025-06-19 10:28:26 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:28:29.941564 | orchestrator | 2025-06-19 10:28:29 | INFO  | Task bd5b7f67-5dbc-49be-8319-9d5c099b40d3 is in state STARTED
2025-06-19 10:28:29.944145 | orchestrator | 2025-06-19 10:28:29 | INFO  | Task afb5edd4-be04-4edc-a8c0-d10557f87bd2 is in state STARTED
2025-06-19 10:28:29.944240 | orchestrator | 2025-06-19 10:28:29 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED
2025-06-19 10:28:29.944256 | orchestrator | 2025-06-19 10:28:29 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:28:32.998201 | orchestrator | 2025-06-19 10:28:32 | INFO  | Task bd5b7f67-5dbc-49be-8319-9d5c099b40d3 is in state STARTED
2025-06-19 10:28:32.999643 | orchestrator | 2025-06-19 10:28:32 | INFO  | Task afb5edd4-be04-4edc-a8c0-d10557f87bd2 is in state STARTED
2025-06-19 10:28:33.001385 | orchestrator | 2025-06-19 10:28:32 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED
2025-06-19 10:28:33.001656 | orchestrator | 2025-06-19 10:28:32 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:28:36.048684 | orchestrator | 2025-06-19 10:28:36 | INFO  | Task bd5b7f67-5dbc-49be-8319-9d5c099b40d3 is in state STARTED
2025-06-19 10:28:36.050760 | orchestrator | 2025-06-19 10:28:36 | INFO  | Task afb5edd4-be04-4edc-a8c0-d10557f87bd2 is in state STARTED
2025-06-19 10:28:36.051974 | orchestrator | 2025-06-19 10:28:36 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED
2025-06-19 10:28:36.052005 | orchestrator | 2025-06-19 10:28:36 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:28:39.085338 | orchestrator | 2025-06-19 10:28:39 | INFO  | Task bd5b7f67-5dbc-49be-8319-9d5c099b40d3 is in state STARTED
2025-06-19 10:28:39.085441 | orchestrator | 2025-06-19 10:28:39 | INFO  | Task afb5edd4-be04-4edc-a8c0-d10557f87bd2 is in state STARTED
2025-06-19 10:28:39.086322 | orchestrator | 2025-06-19 10:28:39 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED
2025-06-19 10:28:39.086350 | orchestrator | 2025-06-19 10:28:39 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:28:42.124164 | orchestrator | 2025-06-19 10:28:42 | INFO  | Task bd5b7f67-5dbc-49be-8319-9d5c099b40d3 is in state STARTED
2025-06-19 10:28:42.126357 | orchestrator | 2025-06-19 10:28:42 | INFO  | Task afb5edd4-be04-4edc-a8c0-d10557f87bd2 is in state STARTED
2025-06-19 10:28:42.128237 | orchestrator | 2025-06-19 10:28:42 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED
2025-06-19 10:28:42.128345 | orchestrator | 2025-06-19 10:28:42 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:28:45.177720 | orchestrator | 2025-06-19 10:28:45 | INFO  | Task bd5b7f67-5dbc-49be-8319-9d5c099b40d3 is in state STARTED
2025-06-19 10:28:45.181984 | orchestrator | 2025-06-19 10:28:45 | INFO  | Task afb5edd4-be04-4edc-a8c0-d10557f87bd2 is in state STARTED
2025-06-19 10:28:45.183555 | orchestrator | 2025-06-19 10:28:45 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED
2025-06-19 10:28:45.183817 | orchestrator | 2025-06-19 10:28:45 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:28:48.223854 | orchestrator | 2025-06-19 10:28:48 | INFO  | Task bd5b7f67-5dbc-49be-8319-9d5c099b40d3 is in state STARTED
2025-06-19 10:28:48.225712 | orchestrator | 2025-06-19 10:28:48 | INFO  | Task afb5edd4-be04-4edc-a8c0-d10557f87bd2 is in state STARTED
2025-06-19 10:28:48.229488 | orchestrator | 2025-06-19 10:28:48 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED
2025-06-19 10:28:48.229969 | orchestrator | 2025-06-19 10:28:48 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:28:51.269955 | orchestrator | 2025-06-19 10:28:51 | INFO  | Task bd5b7f67-5dbc-49be-8319-9d5c099b40d3 is in state STARTED
2025-06-19 10:28:51.270925 | orchestrator | 2025-06-19 10:28:51 | INFO  | Task afb5edd4-be04-4edc-a8c0-d10557f87bd2 is in state STARTED
2025-06-19 10:28:51.271621 | orchestrator | 2025-06-19 10:28:51 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED
2025-06-19 10:28:51.271655 | orchestrator | 2025-06-19 10:28:51 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:28:54.326146 | orchestrator | 2025-06-19 10:28:54 | INFO  | Task bd5b7f67-5dbc-49be-8319-9d5c099b40d3 is in state STARTED
2025-06-19 10:28:54.329335 | orchestrator | 2025-06-19 10:28:54 | INFO  | Task afb5edd4-be04-4edc-a8c0-d10557f87bd2 is in state STARTED
2025-06-19 10:28:54.332529 | orchestrator | 2025-06-19 10:28:54 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED
2025-06-19 10:28:54.334360 | orchestrator | 2025-06-19 10:28:54 | INFO  | Task 0f1b278f-6d7a-4f24-9d60-ba1e54d6329f is in state STARTED
2025-06-19 10:28:54.334405 | orchestrator | 2025-06-19 10:28:54 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:28:57.367816 | orchestrator | 2025-06-19 10:28:57 | INFO  | Task bd5b7f67-5dbc-49be-8319-9d5c099b40d3 is in state STARTED
2025-06-19 10:28:57.373225 | orchestrator | 2025-06-19 10:28:57 | INFO  | Task afb5edd4-be04-4edc-a8c0-d10557f87bd2 is in state STARTED
2025-06-19 10:28:57.376395 | orchestrator | 2025-06-19 10:28:57 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED
2025-06-19 10:28:57.376897 | orchestrator | 2025-06-19 10:28:57 | INFO  | Task 0f1b278f-6d7a-4f24-9d60-ba1e54d6329f is in state STARTED
2025-06-19 10:28:57.376923 | orchestrator | 2025-06-19 10:28:57 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:29:00.401082 | orchestrator | 2025-06-19 10:29:00 | INFO  | Task bd5b7f67-5dbc-49be-8319-9d5c099b40d3 is in state STARTED
2025-06-19 10:29:00.401220 | orchestrator | 2025-06-19 10:29:00 | INFO  | Task afb5edd4-be04-4edc-a8c0-d10557f87bd2 is in state STARTED
2025-06-19 10:29:00.401665 | orchestrator | 2025-06-19 10:29:00 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED
2025-06-19 10:29:00.404118 | orchestrator | 2025-06-19 10:29:00 | INFO  | Task 0f1b278f-6d7a-4f24-9d60-ba1e54d6329f is in state STARTED
2025-06-19 10:29:00.404247 | orchestrator | 2025-06-19 10:29:00 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:29:03.434593 | orchestrator | 2025-06-19 10:29:03 | INFO  | Task bd5b7f67-5dbc-49be-8319-9d5c099b40d3 is in state STARTED
2025-06-19 10:29:03.435667 | orchestrator | 2025-06-19 10:29:03 | INFO  | Task afb5edd4-be04-4edc-a8c0-d10557f87bd2 is in state STARTED
2025-06-19 10:29:03.436872 | orchestrator | 2025-06-19 10:29:03 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED
2025-06-19 10:29:03.437467 | orchestrator | 2025-06-19 10:29:03 | INFO  | Task 0f1b278f-6d7a-4f24-9d60-ba1e54d6329f is in state STARTED
2025-06-19 10:29:03.438706 | orchestrator | 2025-06-19 10:29:03 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:29:06.479303 | orchestrator | 2025-06-19 10:29:06 | INFO  | Task bd5b7f67-5dbc-49be-8319-9d5c099b40d3 is in state STARTED
2025-06-19 10:29:06.479437 | orchestrator | 2025-06-19 10:29:06 | INFO  | Task afb5edd4-be04-4edc-a8c0-d10557f87bd2 is in state STARTED
2025-06-19 10:29:06.479465 | orchestrator | 2025-06-19 10:29:06 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED
2025-06-19 10:29:06.479815 | orchestrator | 2025-06-19 10:29:06 | INFO  | Task 0f1b278f-6d7a-4f24-9d60-ba1e54d6329f is in state STARTED
2025-06-19 10:29:06.479852 | orchestrator | 2025-06-19 10:29:06 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:29:09.506092 | orchestrator | 2025-06-19 10:29:09 | INFO  | Task bd5b7f67-5dbc-49be-8319-9d5c099b40d3 is in state STARTED
2025-06-19 10:29:09.506678 | orchestrator | 2025-06-19 10:29:09 | INFO  | Task afb5edd4-be04-4edc-a8c0-d10557f87bd2 is in state STARTED
2025-06-19 10:29:09.507202 | orchestrator | 2025-06-19 10:29:09 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED
2025-06-19 10:29:09.507855 | orchestrator | 2025-06-19 10:29:09 | INFO  | Task 0f1b278f-6d7a-4f24-9d60-ba1e54d6329f is in state SUCCESS
2025-06-19 10:29:09.507957 | orchestrator | 2025-06-19 10:29:09 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:29:12.562111 | orchestrator |
2025-06-19 10:29:12.562222 | orchestrator | None
2025-06-19 10:29:12.562238 | orchestrator |
2025-06-19 10:29:12.562250 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-19 10:29:12.562262 | orchestrator |
2025-06-19 10:29:12.562273 | orchestrator | TASK [Group hosts based on
Kolla action] ***************************************
2025-06-19 10:29:12.562286 | orchestrator | Thursday 19 June 2025 10:26:44 +0000 (0:00:00.176) 0:00:00.176 *********
2025-06-19 10:29:12.562297 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:29:12.562405 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:29:12.562419 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:29:12.562430 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:29:12.562441 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:29:12.562476 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:29:12.562488 | orchestrator |
2025-06-19 10:29:12.562499 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-19 10:29:12.562536 | orchestrator | Thursday 19 June 2025 10:26:45 +0000 (0:00:01.007) 0:00:01.184 *********
2025-06-19 10:29:12.562548 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2025-06-19 10:29:12.562560 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2025-06-19 10:29:12.562571 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2025-06-19 10:29:12.562582 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2025-06-19 10:29:12.562593 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2025-06-19 10:29:12.562603 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2025-06-19 10:29:12.562614 | orchestrator |
2025-06-19 10:29:12.562625 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2025-06-19 10:29:12.562636 | orchestrator |
2025-06-19 10:29:12.562647 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2025-06-19 10:29:12.562658 | orchestrator | Thursday 19 June 2025 10:26:46 +0000 (0:00:01.046) 0:00:02.231 *********
2025-06-19 10:29:12.562670 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-19 10:29:12.562682 | orchestrator |
2025-06-19 10:29:12.562692 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2025-06-19 10:29:12.562703 | orchestrator | Thursday 19 June 2025 10:26:48 +0000 (0:00:01.117) 0:00:03.348 *********
2025-06-19 10:29:12.562717 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:29:12.562730 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:29:12.562742 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:29:12.562767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:29:12.562779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:29:12.562790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:29:12.562809 | orchestrator |
2025-06-19 10:29:12.562840 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2025-06-19 10:29:12.562852 | orchestrator | Thursday 19 June 2025 10:26:49 +0000 (0:00:01.156) 0:00:04.505 *********
2025-06-19 10:29:12.562863 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:29:12.562875 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:29:12.562886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:29:12.562897 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:29:12.562908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:29:12.562919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:29:12.562930 | orchestrator |
2025-06-19 10:29:12.562941 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2025-06-19 10:29:12.562984 | orchestrator | Thursday 19 June 2025 10:26:51 +0000 (0:00:01.344) 0:00:06.365 *********
2025-06-19 10:29:12.563003 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:29:12.563015 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:29:12.563043 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:29:12.563056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:29:12.563067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:29:12.563079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:29:12.563090 | orchestrator |
2025-06-19 10:29:12.563102 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2025-06-19 10:29:12.563113 | orchestrator | Thursday 19 June 2025 10:26:52 +0000 (0:00:01.344) 0:00:07.710 *********
2025-06-19 10:29:12.563125 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:29:12.563137 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:29:12.563148 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:29:12.563164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:29:12.563182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:29:12.563194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group':
'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.563205 | orchestrator | 2025-06-19 10:29:12.563223 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-06-19 10:29:12.563235 | orchestrator | Thursday 19 June 2025 10:26:54 +0000 (0:00:01.631) 0:00:09.341 ********* 2025-06-19 10:29:12.563246 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.563258 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.563269 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.563281 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.563292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.563304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.563315 | orchestrator | 2025-06-19 10:29:12.563326 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-06-19 10:29:12.563345 | orchestrator | Thursday 19 June 2025 10:26:56 +0000 (0:00:01.959) 0:00:11.301 ********* 2025-06-19 10:29:12.563356 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:29:12.563368 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:29:12.563384 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:29:12.563396 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:29:12.563407 | orchestrator | changed: [testbed-node-1] 2025-06-19 
10:29:12.563418 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:29:12.563429 | orchestrator | 2025-06-19 10:29:12.563440 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-06-19 10:29:12.563480 | orchestrator | Thursday 19 June 2025 10:26:58 +0000 (0:00:02.410) 0:00:13.711 ********* 2025-06-19 10:29:12.563492 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-06-19 10:29:12.563504 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-06-19 10:29:12.563514 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-06-19 10:29:12.563525 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-06-19 10:29:12.563536 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-06-19 10:29:12.563546 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-06-19 10:29:12.563557 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-19 10:29:12.563568 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-19 10:29:12.563585 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-19 10:29:12.563596 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-19 10:29:12.563607 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-19 10:29:12.563618 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-19 10:29:12.563629 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-06-19 10:29:12.563640 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-06-19 10:29:12.563651 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-06-19 10:29:12.563662 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-06-19 10:29:12.563673 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-06-19 10:29:12.563684 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-06-19 10:29:12.563695 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-06-19 10:29:12.563707 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-06-19 10:29:12.563717 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-06-19 10:29:12.563728 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-06-19 10:29:12.563739 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-06-19 10:29:12.563750 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-06-19 10:29:12.563769 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-06-19 10:29:12.563780 | orchestrator | changed: 
[testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-06-19 10:29:12.563791 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-06-19 10:29:12.563801 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-06-19 10:29:12.563812 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-06-19 10:29:12.563823 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-06-19 10:29:12.563834 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-06-19 10:29:12.563845 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-06-19 10:29:12.563856 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-06-19 10:29:12.563866 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-06-19 10:29:12.563877 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-06-19 10:29:12.563893 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-06-19 10:29:12.563904 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-06-19 10:29:12.563915 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-06-19 10:29:12.563926 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-06-19 10:29:12.563937 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 
2025-06-19 10:29:12.563948 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-06-19 10:29:12.563959 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-06-19 10:29:12.563969 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-06-19 10:29:12.563981 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-06-19 10:29:12.563997 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-06-19 10:29:12.564008 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-06-19 10:29:12.564019 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-06-19 10:29:12.564029 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-06-19 10:29:12.564040 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-06-19 10:29:12.564051 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-06-19 10:29:12.564062 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-06-19 10:29:12.564073 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 
2025-06-19 10:29:12.564084 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-06-19 10:29:12.564101 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-06-19 10:29:12.564112 | orchestrator | 2025-06-19 10:29:12.564123 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-19 10:29:12.564134 | orchestrator | Thursday 19 June 2025 10:27:17 +0000 (0:00:19.353) 0:00:33.065 ********* 2025-06-19 10:29:12.564145 | orchestrator | 2025-06-19 10:29:12.564156 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-19 10:29:12.564167 | orchestrator | Thursday 19 June 2025 10:27:18 +0000 (0:00:00.249) 0:00:33.315 ********* 2025-06-19 10:29:12.564178 | orchestrator | 2025-06-19 10:29:12.564188 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-19 10:29:12.564199 | orchestrator | Thursday 19 June 2025 10:27:18 +0000 (0:00:00.069) 0:00:33.384 ********* 2025-06-19 10:29:12.564210 | orchestrator | 2025-06-19 10:29:12.564221 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-19 10:29:12.564232 | orchestrator | Thursday 19 June 2025 10:27:18 +0000 (0:00:00.133) 0:00:33.518 ********* 2025-06-19 10:29:12.564242 | orchestrator | 2025-06-19 10:29:12.564253 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-19 10:29:12.564264 | orchestrator | Thursday 19 June 2025 10:27:18 +0000 (0:00:00.130) 0:00:33.648 ********* 2025-06-19 10:29:12.564275 | orchestrator | 2025-06-19 10:29:12.564285 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-19 10:29:12.564296 | orchestrator | Thursday 19 June 2025 10:27:18 +0000 
(0:00:00.122) 0:00:33.771 ********* 2025-06-19 10:29:12.564307 | orchestrator | 2025-06-19 10:29:12.564317 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-06-19 10:29:12.564328 | orchestrator | Thursday 19 June 2025 10:27:18 +0000 (0:00:00.064) 0:00:33.836 ********* 2025-06-19 10:29:12.564339 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:29:12.564349 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:29:12.564360 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:29:12.564371 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:29:12.564382 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:29:12.564393 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:29:12.564403 | orchestrator | 2025-06-19 10:29:12.564415 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-06-19 10:29:12.564426 | orchestrator | Thursday 19 June 2025 10:27:20 +0000 (0:00:01.769) 0:00:35.605 ********* 2025-06-19 10:29:12.564436 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:29:12.564447 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:29:12.564476 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:29:12.564487 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:29:12.564497 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:29:12.564508 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:29:12.564519 | orchestrator | 2025-06-19 10:29:12.564534 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-06-19 10:29:12.564545 | orchestrator | 2025-06-19 10:29:12.564556 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-06-19 10:29:12.564567 | orchestrator | Thursday 19 June 2025 10:28:01 +0000 (0:00:41.286) 0:01:16.891 ********* 2025-06-19 10:29:12.564577 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2025-06-19 10:29:12.564588 | orchestrator | 2025-06-19 10:29:12.564599 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-06-19 10:29:12.564610 | orchestrator | Thursday 19 June 2025 10:28:02 +0000 (0:00:00.653) 0:01:17.545 ********* 2025-06-19 10:29:12.564621 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:29:12.564632 | orchestrator | 2025-06-19 10:29:12.564642 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-06-19 10:29:12.564660 | orchestrator | Thursday 19 June 2025 10:28:02 +0000 (0:00:00.499) 0:01:18.045 ********* 2025-06-19 10:29:12.564670 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:29:12.564681 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:29:12.564692 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:29:12.564702 | orchestrator | 2025-06-19 10:29:12.564713 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-06-19 10:29:12.564724 | orchestrator | Thursday 19 June 2025 10:28:03 +0000 (0:00:00.939) 0:01:18.985 ********* 2025-06-19 10:29:12.564735 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:29:12.564746 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:29:12.564762 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:29:12.564773 | orchestrator | 2025-06-19 10:29:12.564783 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-06-19 10:29:12.564794 | orchestrator | Thursday 19 June 2025 10:28:04 +0000 (0:00:00.355) 0:01:19.341 ********* 2025-06-19 10:29:12.564805 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:29:12.564816 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:29:12.564826 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:29:12.564837 | orchestrator | 2025-06-19 10:29:12.564847 | orchestrator | 
TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-06-19 10:29:12.564858 | orchestrator | Thursday 19 June 2025 10:28:04 +0000 (0:00:00.384) 0:01:19.725 ********* 2025-06-19 10:29:12.564869 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:29:12.564879 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:29:12.564890 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:29:12.564900 | orchestrator | 2025-06-19 10:29:12.564911 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-06-19 10:29:12.564922 | orchestrator | Thursday 19 June 2025 10:28:04 +0000 (0:00:00.322) 0:01:20.047 ********* 2025-06-19 10:29:12.564932 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:29:12.564943 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:29:12.564953 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:29:12.564964 | orchestrator | 2025-06-19 10:29:12.564975 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-06-19 10:29:12.564985 | orchestrator | Thursday 19 June 2025 10:28:05 +0000 (0:00:00.803) 0:01:20.851 ********* 2025-06-19 10:29:12.564996 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:29:12.565007 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:29:12.565017 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:29:12.565028 | orchestrator | 2025-06-19 10:29:12.565039 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-06-19 10:29:12.565049 | orchestrator | Thursday 19 June 2025 10:28:05 +0000 (0:00:00.301) 0:01:21.152 ********* 2025-06-19 10:29:12.565060 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:29:12.565071 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:29:12.565082 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:29:12.565092 | orchestrator | 2025-06-19 10:29:12.565103 | orchestrator | TASK [ovn-db : Divide hosts by their 
OVN NB service port liveness] ************* 2025-06-19 10:29:12.565114 | orchestrator | Thursday 19 June 2025 10:28:06 +0000 (0:00:00.325) 0:01:21.477 ********* 2025-06-19 10:29:12.565124 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:29:12.565135 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:29:12.565145 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:29:12.565156 | orchestrator | 2025-06-19 10:29:12.565167 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-06-19 10:29:12.565178 | orchestrator | Thursday 19 June 2025 10:28:06 +0000 (0:00:00.283) 0:01:21.761 ********* 2025-06-19 10:29:12.565188 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:29:12.565199 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:29:12.565209 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:29:12.565220 | orchestrator | 2025-06-19 10:29:12.565231 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-06-19 10:29:12.565241 | orchestrator | Thursday 19 June 2025 10:28:07 +0000 (0:00:00.514) 0:01:22.275 ********* 2025-06-19 10:29:12.565264 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:29:12.565275 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:29:12.565286 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:29:12.565296 | orchestrator | 2025-06-19 10:29:12.565307 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-06-19 10:29:12.565318 | orchestrator | Thursday 19 June 2025 10:28:07 +0000 (0:00:00.287) 0:01:22.563 ********* 2025-06-19 10:29:12.565329 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:29:12.565339 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:29:12.565350 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:29:12.565361 | orchestrator | 2025-06-19 10:29:12.565371 | orchestrator | TASK [ovn-db : Check if running on all 
OVN SB DB hosts] ************************ 2025-06-19 10:29:12.565382 | orchestrator | Thursday 19 June 2025 10:28:07 +0000 (0:00:00.268) 0:01:22.831 ********* 2025-06-19 10:29:12.565393 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:29:12.565403 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:29:12.565414 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:29:12.565425 | orchestrator | 2025-06-19 10:29:12.565435 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-06-19 10:29:12.565446 | orchestrator | Thursday 19 June 2025 10:28:07 +0000 (0:00:00.298) 0:01:23.130 ********* 2025-06-19 10:29:12.565514 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:29:12.565525 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:29:12.565542 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:29:12.565553 | orchestrator | 2025-06-19 10:29:12.565564 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-06-19 10:29:12.565575 | orchestrator | Thursday 19 June 2025 10:28:08 +0000 (0:00:00.498) 0:01:23.628 ********* 2025-06-19 10:29:12.565585 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:29:12.565596 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:29:12.565607 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:29:12.565617 | orchestrator | 2025-06-19 10:29:12.565628 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-06-19 10:29:12.565639 | orchestrator | Thursday 19 June 2025 10:28:08 +0000 (0:00:00.319) 0:01:23.947 ********* 2025-06-19 10:29:12.565650 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:29:12.565661 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:29:12.565671 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:29:12.565682 | orchestrator | 2025-06-19 10:29:12.565692 | orchestrator | TASK [ovn-db : Divide hosts by their OVN 
SB leader/follower role] ************** 2025-06-19 10:29:12.565702 | orchestrator | Thursday 19 June 2025 10:28:09 +0000 (0:00:00.308) 0:01:24.256 ********* 2025-06-19 10:29:12.565711 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:29:12.565721 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:29:12.565730 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:29:12.565740 | orchestrator | 2025-06-19 10:29:12.565749 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-06-19 10:29:12.565759 | orchestrator | Thursday 19 June 2025 10:28:09 +0000 (0:00:00.320) 0:01:24.577 ********* 2025-06-19 10:29:12.565768 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:29:12.565778 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:29:12.565793 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:29:12.565803 | orchestrator | 2025-06-19 10:29:12.565813 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-06-19 10:29:12.565823 | orchestrator | Thursday 19 June 2025 10:28:09 +0000 (0:00:00.488) 0:01:25.066 ********* 2025-06-19 10:29:12.565833 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:29:12.565843 | orchestrator | 2025-06-19 10:29:12.565852 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-06-19 10:29:12.565862 | orchestrator | Thursday 19 June 2025 10:28:10 +0000 (0:00:00.627) 0:01:25.693 ********* 2025-06-19 10:29:12.565871 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:29:12.565887 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:29:12.565897 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:29:12.565906 | orchestrator | 2025-06-19 10:29:12.565916 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-06-19 10:29:12.565926 | 
orchestrator | Thursday 19 June 2025 10:28:10 +0000 (0:00:00.431) 0:01:26.124 ********* 2025-06-19 10:29:12.565935 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:29:12.565945 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:29:12.565955 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:29:12.565964 | orchestrator | 2025-06-19 10:29:12.565973 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-06-19 10:29:12.565983 | orchestrator | Thursday 19 June 2025 10:28:11 +0000 (0:00:00.684) 0:01:26.809 ********* 2025-06-19 10:29:12.565993 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:29:12.566002 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:29:12.566012 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:29:12.566061 | orchestrator | 2025-06-19 10:29:12.566071 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-06-19 10:29:12.566081 | orchestrator | Thursday 19 June 2025 10:28:12 +0000 (0:00:00.535) 0:01:27.345 ********* 2025-06-19 10:29:12.566091 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:29:12.566101 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:29:12.566110 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:29:12.566120 | orchestrator | 2025-06-19 10:29:12.566129 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-06-19 10:29:12.566139 | orchestrator | Thursday 19 June 2025 10:28:12 +0000 (0:00:00.313) 0:01:27.658 ********* 2025-06-19 10:29:12.566149 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:29:12.566158 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:29:12.566168 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:29:12.566177 | orchestrator | 2025-06-19 10:29:12.566187 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-06-19 10:29:12.566197 | 
orchestrator | Thursday 19 June 2025 10:28:12 +0000 (0:00:00.356) 0:01:28.015 ********* 2025-06-19 10:29:12.566207 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:29:12.566216 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:29:12.566226 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:29:12.566235 | orchestrator | 2025-06-19 10:29:12.566245 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-06-19 10:29:12.566255 | orchestrator | Thursday 19 June 2025 10:28:13 +0000 (0:00:00.308) 0:01:28.323 ********* 2025-06-19 10:29:12.566264 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:29:12.566274 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:29:12.566284 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:29:12.566293 | orchestrator | 2025-06-19 10:29:12.566303 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-06-19 10:29:12.566313 | orchestrator | Thursday 19 June 2025 10:28:13 +0000 (0:00:00.655) 0:01:28.978 ********* 2025-06-19 10:29:12.566323 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:29:12.566332 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:29:12.566342 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:29:12.566351 | orchestrator | 2025-06-19 10:29:12.566361 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-06-19 10:29:12.566370 | orchestrator | Thursday 19 June 2025 10:28:14 +0000 (0:00:00.371) 0:01:29.350 ********* 2025-06-19 10:29:12.566386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 
10:29:12.566407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.566423 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.566499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12 | INFO  | Task bd5b7f67-5dbc-49be-8319-9d5c099b40d3 is in state SUCCESS 2025-06-19 10:29:12.566525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.566536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.566547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.566557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.566567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.566576 | orchestrator | 2025-06-19 10:29:12.566586 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-06-19 10:29:12.566596 | orchestrator | Thursday 19 June 2025 10:28:15 +0000 (0:00:01.557) 0:01:30.907 ********* 2025-06-19 10:29:12.566606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 
'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.566616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.566632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.566642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.566658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.566669 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.566679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.566688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.566698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.566708 | orchestrator | 2025-06-19 10:29:12.566717 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-06-19 10:29:12.566727 | orchestrator | Thursday 19 June 2025 10:28:19 +0000 (0:00:04.277) 0:01:35.184 ********* 2025-06-19 10:29:12.566737 
| orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.566777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.566798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.566808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.566818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.566836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.566846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.566856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.566866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.566875 | orchestrator | 2025-06-19 10:29:12.566885 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-19 
10:29:12.566895 | orchestrator | Thursday 19 June 2025 10:28:22 +0000 (0:00:02.286) 0:01:37.470 ********* 2025-06-19 10:29:12.566907 | orchestrator | 2025-06-19 10:29:12.566923 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-19 10:29:12.566939 | orchestrator | Thursday 19 June 2025 10:28:22 +0000 (0:00:00.070) 0:01:37.541 ********* 2025-06-19 10:29:12.566955 | orchestrator | 2025-06-19 10:29:12.566971 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-19 10:29:12.566987 | orchestrator | Thursday 19 June 2025 10:28:22 +0000 (0:00:00.070) 0:01:37.611 ********* 2025-06-19 10:29:12.566998 | orchestrator | 2025-06-19 10:29:12.567008 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-06-19 10:29:12.567027 | orchestrator | Thursday 19 June 2025 10:28:22 +0000 (0:00:00.070) 0:01:37.681 ********* 2025-06-19 10:29:12.567037 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:29:12.567046 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:29:12.567056 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:29:12.567065 | orchestrator | 2025-06-19 10:29:12.567075 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-06-19 10:29:12.567084 | orchestrator | Thursday 19 June 2025 10:28:29 +0000 (0:00:07.425) 0:01:45.106 ********* 2025-06-19 10:29:12.567093 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:29:12.567103 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:29:12.567112 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:29:12.567121 | orchestrator | 2025-06-19 10:29:12.567131 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-06-19 10:29:12.567140 | orchestrator | Thursday 19 June 2025 10:28:32 +0000 (0:00:02.324) 0:01:47.431 ********* 2025-06-19 10:29:12.567150 | orchestrator | 
changed: [testbed-node-0] 2025-06-19 10:29:12.567159 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:29:12.567168 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:29:12.567178 | orchestrator | 2025-06-19 10:29:12.567187 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-06-19 10:29:12.567197 | orchestrator | Thursday 19 June 2025 10:28:34 +0000 (0:00:02.494) 0:01:49.926 ********* 2025-06-19 10:29:12.567206 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:29:12.567216 | orchestrator | 2025-06-19 10:29:12.567230 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-06-19 10:29:12.567240 | orchestrator | Thursday 19 June 2025 10:28:34 +0000 (0:00:00.115) 0:01:50.042 ********* 2025-06-19 10:29:12.567249 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:29:12.567259 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:29:12.567268 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:29:12.567277 | orchestrator | 2025-06-19 10:29:12.567287 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-06-19 10:29:12.567296 | orchestrator | Thursday 19 June 2025 10:28:35 +0000 (0:00:00.944) 0:01:50.986 ********* 2025-06-19 10:29:12.567306 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:29:12.567315 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:29:12.567325 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:29:12.567334 | orchestrator | 2025-06-19 10:29:12.567344 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-06-19 10:29:12.567353 | orchestrator | Thursday 19 June 2025 10:28:36 +0000 (0:00:00.630) 0:01:51.616 ********* 2025-06-19 10:29:12.567363 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:29:12.567372 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:29:12.567381 | orchestrator | ok: [testbed-node-2] 2025-06-19 
10:29:12.567391 | orchestrator | 2025-06-19 10:29:12.567400 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-06-19 10:29:12.567410 | orchestrator | Thursday 19 June 2025 10:28:37 +0000 (0:00:00.764) 0:01:52.380 ********* 2025-06-19 10:29:12.567419 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:29:12.567428 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:29:12.567438 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:29:12.567447 | orchestrator | 2025-06-19 10:29:12.567523 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-06-19 10:29:12.567546 | orchestrator | Thursday 19 June 2025 10:28:37 +0000 (0:00:00.601) 0:01:52.982 ********* 2025-06-19 10:29:12.567556 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:29:12.567565 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:29:12.567575 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:29:12.567584 | orchestrator | 2025-06-19 10:29:12.567594 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-06-19 10:29:12.567604 | orchestrator | Thursday 19 June 2025 10:28:38 +0000 (0:00:00.764) 0:01:53.746 ********* 2025-06-19 10:29:12.567613 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:29:12.567623 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:29:12.567639 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:29:12.567649 | orchestrator | 2025-06-19 10:29:12.567658 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-06-19 10:29:12.567668 | orchestrator | Thursday 19 June 2025 10:28:39 +0000 (0:00:00.719) 0:01:54.466 ********* 2025-06-19 10:29:12.567678 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:29:12.567687 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:29:12.567696 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:29:12.567706 | orchestrator | 2025-06-19 
10:29:12.567715 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-06-19 10:29:12.567725 | orchestrator | Thursday 19 June 2025 10:28:39 +0000 (0:00:00.356) 0:01:54.822 ********* 2025-06-19 10:29:12.567735 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.567746 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.567757 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.567767 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.567777 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 
'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.567792 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.567802 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.567812 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.567834 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-06-19 10:29:12.567844 | orchestrator | 2025-06-19 10:29:12.567853 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-06-19 10:29:12.567863 | orchestrator | Thursday 19 June 2025 10:28:41 +0000 (0:00:01.580) 0:01:56.403 ********* 2025-06-19 10:29:12.567873 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.567883 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.567893 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.567903 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.567913 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.567923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.567937 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.567948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.567958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.567978 | orchestrator | 2025-06-19 10:29:12.567987 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-06-19 10:29:12.567996 | orchestrator | Thursday 19 June 2025 10:28:44 +0000 (0:00:03.303) 0:01:59.707 ********* 2025-06-19 10:29:12.568010 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.568019 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.568027 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.568035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.568043 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.568051 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.568060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.568071 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.568079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:29:12.568092 | orchestrator | 2025-06-19 10:29:12.568101 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-19 10:29:12.568109 | orchestrator | Thursday 19 June 2025 10:28:47 +0000 (0:00:02.606) 0:02:02.313 ********* 2025-06-19 10:29:12.568116 | orchestrator | 2025-06-19 10:29:12.568124 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-19 10:29:12.568132 | orchestrator | Thursday 19 June 2025 10:28:47 +0000 (0:00:00.068) 0:02:02.382 ********* 2025-06-19 10:29:12.568140 | orchestrator | 2025-06-19 10:29:12.568148 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-19 10:29:12.568155 | orchestrator | Thursday 19 June 2025 10:28:47 +0000 (0:00:00.079) 0:02:02.462 ********* 2025-06-19 10:29:12.568163 | orchestrator | 2025-06-19 10:29:12.568171 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-06-19 10:29:12.568183 | orchestrator | Thursday 19 June 2025 10:28:47 +0000 (0:00:00.068) 0:02:02.531 ********* 2025-06-19 10:29:12.568191 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:29:12.568199 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:29:12.568207 | orchestrator | 2025-06-19 10:29:12.568215 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-06-19 10:29:12.568223 | orchestrator | Thursday 19 June 2025 10:28:53 +0000 (0:00:06.560) 0:02:09.092 ********* 2025-06-19 10:29:12.568231 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:29:12.568239 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:29:12.568246 | 
orchestrator | 2025-06-19 10:29:12.568254 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-06-19 10:29:12.568262 | orchestrator | Thursday 19 June 2025 10:29:00 +0000 (0:00:06.204) 0:02:15.296 ********* 2025-06-19 10:29:12.568270 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:29:12.568278 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:29:12.568286 | orchestrator | 2025-06-19 10:29:12.568294 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-06-19 10:29:12.568301 | orchestrator | Thursday 19 June 2025 10:29:06 +0000 (0:00:06.495) 0:02:21.792 ********* 2025-06-19 10:29:12.568309 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:29:12.568317 | orchestrator | 2025-06-19 10:29:12.568325 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-06-19 10:29:12.568333 | orchestrator | Thursday 19 June 2025 10:29:06 +0000 (0:00:00.112) 0:02:21.904 ********* 2025-06-19 10:29:12.568340 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:29:12.568348 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:29:12.568356 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:29:12.568364 | orchestrator | 2025-06-19 10:29:12.568371 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-06-19 10:29:12.568379 | orchestrator | Thursday 19 June 2025 10:29:07 +0000 (0:00:00.724) 0:02:22.628 ********* 2025-06-19 10:29:12.568387 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:29:12.568395 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:29:12.568403 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:29:12.568410 | orchestrator | 2025-06-19 10:29:12.568418 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-06-19 10:29:12.568426 | orchestrator | Thursday 19 June 2025 10:29:07 +0000 (0:00:00.561) 
0:02:23.190 ********* 2025-06-19 10:29:12.568434 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:29:12.568441 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:29:12.568468 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:29:12.568478 | orchestrator | 2025-06-19 10:29:12.568486 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-06-19 10:29:12.568493 | orchestrator | Thursday 19 June 2025 10:29:09 +0000 (0:00:01.073) 0:02:24.263 ********* 2025-06-19 10:29:12.568501 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:29:12.568514 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:29:12.568522 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:29:12.568529 | orchestrator | 2025-06-19 10:29:12.568537 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-06-19 10:29:12.568545 | orchestrator | Thursday 19 June 2025 10:29:09 +0000 (0:00:00.605) 0:02:24.869 ********* 2025-06-19 10:29:12.568552 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:29:12.568560 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:29:12.568568 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:29:12.568576 | orchestrator | 2025-06-19 10:29:12.568583 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-06-19 10:29:12.568591 | orchestrator | Thursday 19 June 2025 10:29:10 +0000 (0:00:00.767) 0:02:25.637 ********* 2025-06-19 10:29:12.568599 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:29:12.568607 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:29:12.568614 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:29:12.568622 | orchestrator | 2025-06-19 10:29:12.568630 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-19 10:29:12.568638 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-06-19 
10:29:12.568647 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-06-19 10:29:12.568654 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-06-19 10:29:12.568666 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-19 10:29:12.568675 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-19 10:29:12.568682 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-19 10:29:12.568690 | orchestrator | 2025-06-19 10:29:12.568698 | orchestrator | 2025-06-19 10:29:12.568706 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-19 10:29:12.568714 | orchestrator | Thursday 19 June 2025 10:29:11 +0000 (0:00:01.238) 0:02:26.875 ********* 2025-06-19 10:29:12.568721 | orchestrator | =============================================================================== 2025-06-19 10:29:12.568729 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 41.29s 2025-06-19 10:29:12.568737 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 19.35s 2025-06-19 10:29:12.568745 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.99s 2025-06-19 10:29:12.568752 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 8.99s 2025-06-19 10:29:12.568760 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 8.53s 2025-06-19 10:29:12.568773 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.28s 2025-06-19 10:29:12.568781 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.30s 2025-06-19 
10:29:12.568789 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.61s 2025-06-19 10:29:12.568797 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.41s 2025-06-19 10:29:12.568804 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.29s 2025-06-19 10:29:12.568812 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.96s 2025-06-19 10:29:12.568820 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.86s 2025-06-19 10:29:12.568828 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.77s 2025-06-19 10:29:12.568840 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.63s 2025-06-19 10:29:12.568848 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.58s 2025-06-19 10:29:12.568856 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.56s 2025-06-19 10:29:12.568863 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.34s 2025-06-19 10:29:12.568906 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.24s 2025-06-19 10:29:12.568915 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.16s 2025-06-19 10:29:12.568923 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.12s 2025-06-19 10:29:12.568931 | orchestrator | 2025-06-19 10:29:12 | INFO  | Task afb5edd4-be04-4edc-a8c0-d10557f87bd2 is in state STARTED 2025-06-19 10:29:12.569023 | orchestrator | 2025-06-19 10:29:12 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED 2025-06-19 10:29:12.569034 | orchestrator | 2025-06-19 10:29:12 | INFO  | Wait 1 second(s) until the next check 2025-06-19 
10:32:03.202419 | orchestrator | 2025-06-19 10:32:03 | INFO  | Task afb5edd4-be04-4edc-a8c0-d10557f87bd2 is in state SUCCESS 2025-06-19 10:32:03.204723 | orchestrator | 2025-06-19 10:32:03.204846 | orchestrator | 2025-06-19 10:32:03.204866 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-19 10:32:03.204913 | orchestrator | 2025-06-19 10:32:03.204933 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-19 10:32:03.204945 | orchestrator | Thursday 19 June 2025 10:25:43 +0000 (0:00:00.432) 0:00:00.432 ********* 2025-06-19 10:32:03.204956 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:32:03.204968 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:32:03.204979 | orchestrator | ok: 
[testbed-node-2] 2025-06-19 10:32:03.205015 | orchestrator | 2025-06-19 10:32:03.205027 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-19 10:32:03.205072 | orchestrator | Thursday 19 June 2025 10:25:44 +0000 (0:00:00.549) 0:00:00.981 ********* 2025-06-19 10:32:03.205152 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-06-19 10:32:03.205165 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-06-19 10:32:03.205181 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-06-19 10:32:03.205261 | orchestrator | 2025-06-19 10:32:03.205277 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-06-19 10:32:03.205295 | orchestrator | 2025-06-19 10:32:03.205307 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-06-19 10:32:03.205319 | orchestrator | Thursday 19 June 2025 10:25:45 +0000 (0:00:00.736) 0:00:01.718 ********* 2025-06-19 10:32:03.205333 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:32:03.205345 | orchestrator | 2025-06-19 10:32:03.205358 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-06-19 10:32:03.205371 | orchestrator | Thursday 19 June 2025 10:25:46 +0000 (0:00:00.846) 0:00:02.564 ********* 2025-06-19 10:32:03.205383 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:32:03.205396 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:32:03.205408 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:32:03.205421 | orchestrator | 2025-06-19 10:32:03.205433 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-06-19 10:32:03.205446 | orchestrator | Thursday 19 June 2025 10:25:46 +0000 (0:00:00.846) 0:00:03.410 ********* 2025-06-19 
10:32:03.205459 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:32:03.205471 | orchestrator | 2025-06-19 10:32:03.205534 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-06-19 10:32:03.205661 | orchestrator | Thursday 19 June 2025 10:25:47 +0000 (0:00:00.902) 0:00:04.312 ********* 2025-06-19 10:32:03.205683 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:32:03.205702 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:32:03.205720 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:32:03.205761 | orchestrator | 2025-06-19 10:32:03.205779 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-06-19 10:32:03.205795 | orchestrator | Thursday 19 June 2025 10:25:48 +0000 (0:00:00.748) 0:00:05.061 ********* 2025-06-19 10:32:03.205812 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-06-19 10:32:03.205831 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-06-19 10:32:03.205848 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-06-19 10:32:03.205867 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-06-19 10:32:03.205886 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-06-19 10:32:03.205951 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-06-19 10:32:03.205968 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-06-19 10:32:03.206308 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-06-19 10:32:03.206340 | orchestrator | changed: [testbed-node-1] => 
(item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-06-19 10:32:03.206351 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-06-19 10:32:03.206420 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-06-19 10:32:03.206435 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-06-19 10:32:03.206453 | orchestrator | 2025-06-19 10:32:03.206468 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-06-19 10:32:03.206561 | orchestrator | Thursday 19 June 2025 10:25:53 +0000 (0:00:04.559) 0:00:09.620 ********* 2025-06-19 10:32:03.206599 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-06-19 10:32:03.206612 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-06-19 10:32:03.206623 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-06-19 10:32:03.206634 | orchestrator | 2025-06-19 10:32:03.206645 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-06-19 10:32:03.206656 | orchestrator | Thursday 19 June 2025 10:25:54 +0000 (0:00:01.163) 0:00:10.784 ********* 2025-06-19 10:32:03.206666 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-06-19 10:32:03.206677 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-06-19 10:32:03.206715 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-06-19 10:32:03.206774 | orchestrator | 2025-06-19 10:32:03.206901 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-06-19 10:32:03.206986 | orchestrator | Thursday 19 June 2025 10:25:55 +0000 (0:00:01.548) 0:00:12.332 ********* 2025-06-19 10:32:03.207005 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-06-19 10:32:03.207023 | orchestrator | skipping: [testbed-node-0] 2025-06-19 
10:32:03.207054 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-06-19 10:32:03.207066 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.207077 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-06-19 10:32:03.207087 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.207098 | orchestrator | 2025-06-19 10:32:03.207109 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-06-19 10:32:03.207120 | orchestrator | Thursday 19 June 2025 10:25:56 +0000 (0:00:00.767) 0:00:13.100 ********* 2025-06-19 10:32:03.207134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-19 10:32:03.207152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-19 10:32:03.207164 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-19 10:32:03.207186 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-19 10:32:03.207205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-19 10:32:03.207225 | orchestrator | changed: [testbed-node-2] 
=> (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-19 10:32:03.207341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-19 10:32:03.207471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-19 10:32:03.207491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-19 10:32:03.207503 | orchestrator | 2025-06-19 10:32:03.207540 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-06-19 10:32:03.207811 | orchestrator | Thursday 19 June 2025 10:25:58 +0000 (0:00:02.231) 0:00:15.332 ********* 2025-06-19 10:32:03.207832 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:32:03.207898 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:32:03.207920 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:32:03.207935 | orchestrator | 2025-06-19 10:32:03.207957 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-06-19 10:32:03.207969 | orchestrator | Thursday 19 June 2025 10:25:59 +0000 (0:00:01.076) 0:00:16.408 ********* 2025-06-19 10:32:03.207980 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-06-19 10:32:03.207991 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-06-19 10:32:03.208002 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-06-19 10:32:03.208012 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-06-19 10:32:03.208031 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-06-19 10:32:03.208043 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-06-19 10:32:03.208056 | orchestrator | 2025-06-19 10:32:03.208089 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-06-19 10:32:03.208133 | orchestrator | Thursday 19 June 2025 10:26:02 +0000 (0:00:02.159) 0:00:18.567 ********* 2025-06-19 10:32:03.208150 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:32:03.208165 | orchestrator | changed: [testbed-node-1] 
2025-06-19 10:32:03.208176 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:32:03.208187 | orchestrator | 2025-06-19 10:32:03.208198 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-06-19 10:32:03.208209 | orchestrator | Thursday 19 June 2025 10:26:04 +0000 (0:00:02.572) 0:00:21.140 ********* 2025-06-19 10:32:03.208220 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:32:03.208231 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:32:03.208242 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:32:03.208252 | orchestrator | 2025-06-19 10:32:03.208263 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-06-19 10:32:03.208274 | orchestrator | Thursday 19 June 2025 10:26:06 +0000 (0:00:01.736) 0:00:22.876 ********* 2025-06-19 10:32:03.208292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-19 10:32:03.208315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-19 10:32:03.208327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-19 10:32:03.208350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b9837274ec763b362413b214ceb12364fdc3dffa', '__omit_place_holder__b9837274ec763b362413b214ceb12364fdc3dffa'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-19 10:32:03.208362 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.208374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-19 10:32:03.208386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-19 10:32:03.208397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-19 10:32:03.208413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b9837274ec763b362413b214ceb12364fdc3dffa', 
'__omit_place_holder__b9837274ec763b362413b214ceb12364fdc3dffa'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-19 10:32:03.208425 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.208514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-19 10:32:03.208872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-19 10:32:03.208887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-19 10:32:03.208898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b9837274ec763b362413b214ceb12364fdc3dffa', '__omit_place_holder__b9837274ec763b362413b214ceb12364fdc3dffa'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-19 10:32:03.208909 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.208920 | orchestrator | 2025-06-19 10:32:03.208949 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-06-19 10:32:03.208961 | orchestrator | Thursday 19 June 2025 10:26:07 +0000 (0:00:00.777) 0:00:23.654 ********* 2025-06-19 10:32:03.208973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-19 10:32:03.209016 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-19 10:32:03.209061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-19 10:32:03.209089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-19 10:32:03.209101 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-19 10:32:03.209112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b9837274ec763b362413b214ceb12364fdc3dffa', '__omit_place_holder__b9837274ec763b362413b214ceb12364fdc3dffa'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-19 10:32:03.209124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-19 10:32:03.209135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 
'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-19 10:32:03.209147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b9837274ec763b362413b214ceb12364fdc3dffa', '__omit_place_holder__b9837274ec763b362413b214ceb12364fdc3dffa'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-19 10:32:03.209233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-19 10:32:03.209261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-19 10:32:03.209345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__b9837274ec763b362413b214ceb12364fdc3dffa', '__omit_place_holder__b9837274ec763b362413b214ceb12364fdc3dffa'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-19 10:32:03.209387 | orchestrator | 2025-06-19 10:32:03.209403 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-06-19 10:32:03.209421 | orchestrator | Thursday 19 June 2025 10:26:11 +0000 (0:00:03.916) 0:00:27.570 ********* 2025-06-19 10:32:03.209438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-19 10:32:03.209512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-19 10:32:03.209620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-19 10:32:03.209653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-19 10:32:03.209686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-19 10:32:03.209700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-19 10:32:03.209712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-19 10:32:03.209723 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-19 10:32:03.209760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-19 10:32:03.209771 | orchestrator | 2025-06-19 10:32:03.209782 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-06-19 10:32:03.209793 | orchestrator | Thursday 19 June 2025 10:26:15 +0000 (0:00:04.555) 0:00:32.125 ********* 2025-06-19 10:32:03.209805 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-06-19 10:32:03.209822 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-06-19 10:32:03.209840 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-06-19 10:32:03.209850 | orchestrator | 2025-06-19 10:32:03.209861 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-06-19 10:32:03.209877 | orchestrator | Thursday 19 June 2025 10:26:18 
+0000 (0:00:03.191) 0:00:35.317 ********* 2025-06-19 10:32:03.209895 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-06-19 10:32:03.209913 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-06-19 10:32:03.209931 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-06-19 10:32:03.209949 | orchestrator | 2025-06-19 10:32:03.209982 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-06-19 10:32:03.210001 | orchestrator | Thursday 19 June 2025 10:26:25 +0000 (0:00:06.196) 0:00:41.514 ********* 2025-06-19 10:32:03.210080 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.210096 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.210107 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.210118 | orchestrator | 2025-06-19 10:32:03.210128 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-06-19 10:32:03.210139 | orchestrator | Thursday 19 June 2025 10:26:26 +0000 (0:00:01.304) 0:00:42.818 ********* 2025-06-19 10:32:03.210150 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-06-19 10:32:03.210161 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-06-19 10:32:03.210172 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-06-19 10:32:03.210183 | orchestrator | 2025-06-19 10:32:03.210194 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-06-19 10:32:03.210204 | orchestrator | Thursday 19 June 2025 10:26:30 
+0000 (0:00:03.706) 0:00:46.524 ********* 2025-06-19 10:32:03.210215 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-06-19 10:32:03.210226 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-06-19 10:32:03.210237 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-06-19 10:32:03.210247 | orchestrator | 2025-06-19 10:32:03.210258 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-06-19 10:32:03.210269 | orchestrator | Thursday 19 June 2025 10:26:34 +0000 (0:00:04.840) 0:00:51.365 ********* 2025-06-19 10:32:03.210280 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-06-19 10:32:03.210408 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-06-19 10:32:03.210420 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-06-19 10:32:03.210430 | orchestrator | 2025-06-19 10:32:03.210441 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-06-19 10:32:03.210452 | orchestrator | Thursday 19 June 2025 10:26:36 +0000 (0:00:01.977) 0:00:53.343 ********* 2025-06-19 10:32:03.210463 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-06-19 10:32:03.210474 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-06-19 10:32:03.210508 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-06-19 10:32:03.210519 | orchestrator | 2025-06-19 10:32:03.210531 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-06-19 10:32:03.210541 | orchestrator | Thursday 19 June 2025 10:26:38 +0000 (0:00:01.602) 0:00:54.946 ********* 2025-06-19 10:32:03.210598 | orchestrator | included: 
/ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:32:03.210620 | orchestrator | 2025-06-19 10:32:03.210631 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-06-19 10:32:03.210642 | orchestrator | Thursday 19 June 2025 10:26:39 +0000 (0:00:00.854) 0:00:55.800 ********* 2025-06-19 10:32:03.210654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-19 10:32:03.210672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-19 10:32:03.210693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-19 10:32:03.210705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-19 10:32:03.210717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-19 10:32:03.210755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 
'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-19 10:32:03.210775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-19 10:32:03.210787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-19 10:32:03.210803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-19 10:32:03.210818 | orchestrator | 2025-06-19 10:32:03.210836 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-06-19 10:32:03.210848 | orchestrator | Thursday 19 June 2025 10:26:42 +0000 (0:00:03.557) 0:00:59.358 ********* 2025-06-19 10:32:03.210868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-19 10:32:03.210880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-19 10:32:03.210892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-19 10:32:03.210903 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.210914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-19 10:32:03.210932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-19 10:32:03.210949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-19 10:32:03.210960 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.210972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-19 10:32:03.210990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-19 10:32:03.211002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-19 10:32:03.211013 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.211024 | orchestrator | 2025-06-19 10:32:03.211034 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-06-19 10:32:03.211045 | orchestrator | Thursday 19 June 2025 10:26:43 +0000 (0:00:00.693) 0:01:00.052 ********* 2025-06-19 10:32:03.211056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-19 10:32:03.211074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-19 10:32:03.211086 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-19 10:32:03.211097 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.211113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-19 10:32:03.211131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-19 10:32:03.211143 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-19 10:32:03.211156 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.211175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-19 10:32:03.211206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-19 10:32:03.211226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-19 10:32:03.211241 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.211252 | orchestrator | 2025-06-19 10:32:03.211263 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-06-19 10:32:03.211274 | orchestrator | Thursday 19 June 2025 10:26:44 +0000 (0:00:01.208) 0:01:01.261 ********* 2025-06-19 10:32:03.211290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-19 10:32:03.211311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-19 10:32:03.211323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-19 10:32:03.211356 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.211408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-19 10:32:03.211437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-19 10:32:03.211450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-19 10:32:03.211462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-19 10:32:03.211484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-19 
10:32:03.211495 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.211515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-19 10:32:03.211551 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.211563 | orchestrator | 2025-06-19 10:32:03.211583 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-06-19 10:32:03.211594 | orchestrator | Thursday 19 June 2025 10:26:45 +0000 (0:00:00.730) 0:01:01.991 ********* 2025-06-19 10:32:03.211606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-19 10:32:03.211624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-19 10:32:03.211636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-19 10:32:03.211647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-19 10:32:03.211658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-19 10:32:03.211669 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:32:03.211686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-19 10:32:03.211706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-06-19 10:32:03.211724 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:32:03.211811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-19 10:32:03.211823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-19 10:32:03.211835 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:32:03.211846 | orchestrator |
2025-06-19 10:32:03.211857 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2025-06-19 10:32:03.211867 | orchestrator | Thursday 19 June 2025 10:26:46 +0000 (0:00:00.749) 0:01:02.741 *********
2025-06-19 10:32:03.211879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-06-19 10:32:03.211890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-19 10:32:03.211907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-19 10:32:03.211919 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:32:03.211938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-06-19 10:32:03.211957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-06-19 10:32:03.211969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-19 10:32:03.211980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-19 10:32:03.211991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-19 10:32:03.212003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-19 10:32:03.212014 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:32:03.212025 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:32:03.212035 | orchestrator |
2025-06-19 10:32:03.212046 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] *******
2025-06-19 10:32:03.212057 | orchestrator | Thursday 19 June 2025 10:26:47 +0000 (0:00:00.965) 0:01:03.706 *********
2025-06-19 10:32:03.212073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-06-19 10:32:03.212097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-19 10:32:03.212109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-19 10:32:03.212121 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:32:03.212132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-06-19 10:32:03.212143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-19 10:32:03.212154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-19 10:32:03.212166 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:32:03.212181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-06-19 10:32:03.212198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-19 10:32:03.212217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-19 10:32:03.212229 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:32:03.212239 | orchestrator |
2025-06-19 10:32:03.212250 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] ***
2025-06-19 10:32:03.212261 | orchestrator | Thursday 19 June 2025 10:26:47 +0000 (0:00:00.608) 0:01:04.315 *********
2025-06-19 10:32:03.212272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-06-19 10:32:03.212283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-19 10:32:03.212295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-19 10:32:03.212305 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:32:03.212314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-06-19 10:32:03.212334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-19 10:32:03.212352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-19 10:32:03.212363 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:32:03.212372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-06-19 10:32:03.212382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-19 10:32:03.212393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-19 10:32:03.212402 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:32:03.212412 | orchestrator |
2025-06-19 10:32:03.212421 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] ****
2025-06-19 10:32:03.212431 | orchestrator | Thursday 19 June 2025 10:26:48 +0000 (0:00:00.718) 0:01:05.034 *********
2025-06-19 10:32:03.212441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-06-19 10:32:03.212460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-19 10:32:03.212471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-19 10:32:03.212481 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:32:03.212496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-06-19 10:32:03.212507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-19 10:32:03.212517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-19 10:32:03.212527 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:32:03.212536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-06-19 10:32:03.212546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-19 10:32:03.212570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-19 10:32:03.212581 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:32:03.212590 | orchestrator |
2025-06-19 10:32:03.212600 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************
2025-06-19 10:32:03.212610 | orchestrator | Thursday 19 June 2025 10:26:49 +0000 (0:00:00.935) 0:01:05.970 *********
2025-06-19 10:32:03.212619 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-06-19 10:32:03.212629 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-06-19 10:32:03.212644 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-06-19 10:32:03.212655 | orchestrator |
2025-06-19 10:32:03.212664 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2025-06-19 10:32:03.212674 | orchestrator | Thursday 19 June 2025 10:26:51 +0000 (0:00:01.753) 0:01:07.723 *********
2025-06-19 10:32:03.212683 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-06-19 10:32:03.212693 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-06-19 10:32:03.212703 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-06-19 10:32:03.212712 | orchestrator |
2025-06-19 10:32:03.212721 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2025-06-19 10:32:03.212792 | orchestrator | Thursday 19 June 2025 10:26:52 +0000 (0:00:01.677) 0:01:09.400 *********
2025-06-19 10:32:03.212803 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-06-19 10:32:03.212813 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-06-19 10:32:03.212823 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-06-19 10:32:03.212832 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2025-06-19 10:32:03.212842 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:32:03.212851 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-06-19 10:32:03.212861 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:32:03.212870 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-06-19 10:32:03.212880 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:32:03.212889 | orchestrator |
2025-06-19 10:32:03.212898 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] ****************************
2025-06-19 10:32:03.212908 | orchestrator | Thursday 19 June 2025 10:26:53 +0000 (0:00:00.951) 0:01:10.351 *********
2025-06-19 10:32:03.212918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-06-19 10:32:03.212937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-06-19 10:32:03.212952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-06-19 10:32:03.212969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-19 10:32:03.212980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-19 10:32:03.212990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-19 10:32:03.213000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-19 10:32:03.213018 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-19 10:32:03.213028 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-19 10:32:03.213038 | orchestrator |
2025-06-19 10:32:03.213048 | orchestrator | TASK [include_role : aodh] *****************************************************
2025-06-19 10:32:03.213057 | orchestrator | Thursday 19 June 2025 10:26:57 +0000 (0:00:03.540) 0:01:13.891 *********
2025-06-19 10:32:03.213067 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-19 10:32:03.213077 | orchestrator |
2025-06-19 10:32:03.213086 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2025-06-19 10:32:03.213096 | orchestrator | Thursday 19 June 2025 10:26:58 +0000 (0:00:00.629) 0:01:14.521 *********
2025-06-19 10:32:03.213111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-06-19 10:32:03.213130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2025-06-19 10:32:03.213141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-06-19 10:32:03.213158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2025-06-19 10:32:03.213168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-06-19 10:32:03.213178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2025-06-19 10:32:03.213188 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2025-06-19 10:32:03.213211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image':
'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.213222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-06-19 10:32:03.213238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-19 10:32:03.213267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.213278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.213288 | orchestrator | 2025-06-19 10:32:03.213298 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-06-19 10:32:03.213308 | orchestrator | Thursday 19 June 2025 10:27:03 +0000 (0:00:05.082) 0:01:19.603 ********* 2025-06-19 10:32:03.213322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-06-19 10:32:03.214092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-19 10:32:03.214113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.214133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.214145 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.214156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-06-19 10:32:03.214168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-19 10:32:03.214185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': 
['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.214197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.214207 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.214227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-06-19 10:32:03.214245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-19 10:32:03.214254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.214264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.214273 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.214282 | orchestrator | 2025-06-19 10:32:03.214292 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-06-19 
10:32:03.214301 | orchestrator | Thursday 19 June 2025 10:27:03 +0000 (0:00:00.642) 0:01:20.246 ********* 2025-06-19 10:32:03.214310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-06-19 10:32:03.214321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-06-19 10:32:03.214331 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.214341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-06-19 10:32:03.214353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-06-19 10:32:03.214361 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.214369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-06-19 10:32:03.214377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-06-19 10:32:03.214385 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.214394 | orchestrator | 2025-06-19 10:32:03.214406 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-06-19 10:32:03.214421 | orchestrator | Thursday 19 June 2025 10:27:04 +0000 (0:00:00.930) 0:01:21.177 ********* 2025-06-19 
10:32:03.214429 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:32:03.214437 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:32:03.214445 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:32:03.214452 | orchestrator | 2025-06-19 10:32:03.214460 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-06-19 10:32:03.214468 | orchestrator | Thursday 19 June 2025 10:27:06 +0000 (0:00:01.508) 0:01:22.686 ********* 2025-06-19 10:32:03.214476 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:32:03.214483 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:32:03.214491 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:32:03.214499 | orchestrator | 2025-06-19 10:32:03.214512 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-06-19 10:32:03.214522 | orchestrator | Thursday 19 June 2025 10:27:07 +0000 (0:00:01.733) 0:01:24.419 ********* 2025-06-19 10:32:03.214530 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:32:03.214538 | orchestrator | 2025-06-19 10:32:03.214546 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-06-19 10:32:03.214554 | orchestrator | Thursday 19 June 2025 10:27:08 +0000 (0:00:00.680) 0:01:25.099 ********* 2025-06-19 10:32:03.214563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 
'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-19 10:32:03.214572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.214580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.214595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-19 10:32:03.214638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-19 10:32:03.214655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.214664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.214672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.214680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.214688 | orchestrator | 2025-06-19 10:32:03.214701 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-06-19 10:32:03.214715 | orchestrator | Thursday 19 June 2025 10:27:12 +0000 (0:00:03.381) 0:01:28.481 ********* 2025-06-19 10:32:03.214748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-19 10:32:03.214758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.214766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.214774 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.214782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-19 10:32:03.214791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 
'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.214808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.214817 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.214830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 
'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-19 10:32:03.214839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.214847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.214855 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.214863 | orchestrator | 2025-06-19 10:32:03.214871 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-06-19 10:32:03.214879 | orchestrator | Thursday 19 June 2025 10:27:12 +0000 (0:00:00.670) 0:01:29.152 ********* 2025-06-19 10:32:03.214887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-06-19 10:32:03.214895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-06-19 10:32:03.214904 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:32:03.214911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-06-19 10:32:03.214924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-06-19 10:32:03.214932 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:32:03.214940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-06-19 10:32:03.214973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-06-19 10:32:03.214982 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:32:03.214990 | orchestrator |
2025-06-19 10:32:03.214997 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] ***********
2025-06-19 10:32:03.215005 | orchestrator | Thursday 19 June 2025 10:27:13 +0000 (0:00:00.930) 0:01:30.082 *********
2025-06-19 10:32:03.215013 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:32:03.215021 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:32:03.215028 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:32:03.215036 | orchestrator |
2025-06-19 10:32:03.215044 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] ***********
2025-06-19 10:32:03.215051 | orchestrator | Thursday 19 June 2025 10:27:14 +0000 (0:00:01.246) 0:01:31.329 *********
2025-06-19 10:32:03.215059 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:32:03.215067 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:32:03.215074 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:32:03.215082 | orchestrator |
2025-06-19 10:32:03.215094 | orchestrator | TASK [include_role : blazar] ***************************************************
2025-06-19 10:32:03.215102 | orchestrator | Thursday 19 June 2025 10:27:16 +0000 (0:00:00.260) 0:01:33.126 *********
2025-06-19 10:32:03.215110 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:32:03.215118 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:32:03.215125 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:32:03.215133 | orchestrator |
2025-06-19 10:32:03.215141 | orchestrator | TASK [include_role : ceph-rgw] *************************************************
2025-06-19 10:32:03.215149 | orchestrator | Thursday 19 June 2025 10:27:16 +0000 (0:00:00.818) 0:01:33.386 *********
2025-06-19 10:32:03.215156 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-19 10:32:03.215164 | orchestrator |
2025-06-19 10:32:03.215172 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] *******************
2025-06-19 10:32:03.215180 | orchestrator | Thursday 19 June 2025 10:27:17 +0000 (0:00:00.818) 0:01:34.204 *********
2025-06-19 10:32:03.215188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled':
True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-06-19 10:32:03.215197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-06-19 10:32:03.215211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 
192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-06-19 10:32:03.215219 | orchestrator | 2025-06-19 10:32:03.215227 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-06-19 10:32:03.215235 | orchestrator | Thursday 19 June 2025 10:27:20 +0000 (0:00:02.670) 0:01:36.875 ********* 2025-06-19 10:32:03.215252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-06-19 10:32:03.215261 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.215269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 
2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-06-19 10:32:03.215277 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.215285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-06-19 10:32:03.215298 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.215306 | orchestrator | 2025-06-19 10:32:03.215314 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-06-19 10:32:03.215322 | orchestrator | Thursday 19 June 2025 10:27:21 +0000 (0:00:01.529) 0:01:38.405 ********* 2025-06-19 10:32:03.215330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-19 10:32:03.215340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-19 10:32:03.215349 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.215357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-19 10:32:03.215369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-19 10:32:03.215377 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.215390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-06-19 10:32:03.215398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-06-19 10:32:03.215406 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:32:03.215414 | orchestrator |
2025-06-19 10:32:03.215422 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] ***********
2025-06-19 10:32:03.215430 | orchestrator | Thursday 19 June 2025 10:27:23 +0000 (0:00:01.728) 0:01:40.133 *********
2025-06-19 10:32:03.215437 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:32:03.215445 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:32:03.215453 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:32:03.215461 | orchestrator |
2025-06-19 10:32:03.215469 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] ***********
2025-06-19 10:32:03.215481 | orchestrator | Thursday 19 June 2025 10:27:24 +0000 (0:00:00.374) 0:01:40.508 *********
2025-06-19 10:32:03.215489 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:32:03.215497 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:32:03.215505 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:32:03.215512 | orchestrator |
2025-06-19 10:32:03.215520 | orchestrator | TASK [include_role : cinder] ***************************************************
2025-06-19 10:32:03.215528 |
orchestrator | Thursday 19 June 2025 10:27:25 +0000 (0:00:01.069) 0:01:41.577 ********* 2025-06-19 10:32:03.215536 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:32:03.215544 | orchestrator | 2025-06-19 10:32:03.215551 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-06-19 10:32:03.215559 | orchestrator | Thursday 19 June 2025 10:27:26 +0000 (0:00:00.942) 0:01:42.520 ********* 2025-06-19 10:32:03.215567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-19 10:32:03.215576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 
5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.215589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.215603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.215612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-19 10:32:03.215625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.215633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.215646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.215658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-19 10:32:03.215667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 
'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.215683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.215691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-19 
10:32:03.215699 | orchestrator | 2025-06-19 10:32:03.215707 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-06-19 10:32:03.215715 | orchestrator | Thursday 19 June 2025 10:27:29 +0000 (0:00:03.792) 0:01:46.312 ********* 2025-06-19 10:32:03.215746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-19 10:32:03.215757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.215770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 
'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.215784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.215792 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.215800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-19 10:32:03.215808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.215820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.215833 | orchestrator 
| skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.215846 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.215854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-19 10:32:03.215863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.215871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.215882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.215890 | 
orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.215898 | orchestrator | 2025-06-19 10:32:03.215906 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-06-19 10:32:03.215914 | orchestrator | Thursday 19 June 2025 10:27:30 +0000 (0:00:00.928) 0:01:47.241 ********* 2025-06-19 10:32:03.215927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-19 10:32:03.215939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-19 10:32:03.215947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-19 10:32:03.215955 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.215963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-19 10:32:03.215971 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.215979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-19 10:32:03.215987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-19 
10:32:03.215995 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.216003 | orchestrator | 2025-06-19 10:32:03.216010 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-06-19 10:32:03.216018 | orchestrator | Thursday 19 June 2025 10:27:31 +0000 (0:00:01.120) 0:01:48.361 ********* 2025-06-19 10:32:03.216026 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:32:03.216034 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:32:03.216041 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:32:03.216049 | orchestrator | 2025-06-19 10:32:03.216057 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-06-19 10:32:03.216064 | orchestrator | Thursday 19 June 2025 10:27:33 +0000 (0:00:01.442) 0:01:49.804 ********* 2025-06-19 10:32:03.216072 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:32:03.216080 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:32:03.216088 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:32:03.216095 | orchestrator | 2025-06-19 10:32:03.216103 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-06-19 10:32:03.216111 | orchestrator | Thursday 19 June 2025 10:27:35 +0000 (0:00:02.004) 0:01:51.809 ********* 2025-06-19 10:32:03.216119 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.216126 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.216134 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.216142 | orchestrator | 2025-06-19 10:32:03.216149 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-06-19 10:32:03.216157 | orchestrator | Thursday 19 June 2025 10:27:35 +0000 (0:00:00.337) 0:01:52.147 ********* 2025-06-19 10:32:03.216165 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.216172 | orchestrator | skipping: [testbed-node-1] 2025-06-19 
10:32:03.216180 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.216188 | orchestrator | 2025-06-19 10:32:03.216195 | orchestrator | TASK [include_role : designate] ************************************************ 2025-06-19 10:32:03.216203 | orchestrator | Thursday 19 June 2025 10:27:36 +0000 (0:00:00.340) 0:01:52.488 ********* 2025-06-19 10:32:03.216211 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:32:03.216218 | orchestrator | 2025-06-19 10:32:03.216226 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-06-19 10:32:03.216234 | orchestrator | Thursday 19 June 2025 10:27:37 +0000 (0:00:00.971) 0:01:53.460 ********* 2025-06-19 10:32:03.216250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-19 10:32:03.216263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-19 10:32:03.216271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.216280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.216288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.216296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.216309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.216321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-19 10:32:03.216334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-19 10:32:03.216343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.216351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.216359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.216367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-19 10:32:03.216383 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-19 10:32:03.216395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.216404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.216412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 
'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.216420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.216428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.216441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.216453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.216461 | orchestrator | 2025-06-19 10:32:03.216469 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-06-19 10:32:03.216477 | orchestrator | Thursday 19 June 2025 10:27:40 +0000 (0:00:03.701) 0:01:57.161 ********* 2025-06-19 10:32:03.216490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-19 10:32:03.216499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-19 10:32:03.216507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.216522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.216531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.216542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.216555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.216563 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.216571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-19 10:32:03.216579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-19 10:32:03.216592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.216601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.216612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.216779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-19 10:32:03.216799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.216813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-19 10:32:03.216829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 
'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-06-19 10:32:03.216838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-19 10:32:03.216845 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:32:03.216854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-19 10:32:03.216862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-19 10:32:03.216875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-19 10:32:03.216902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-06-19 10:32:03.216911 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:32:03.216919 | orchestrator |
2025-06-19 10:32:03.216927 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2025-06-19 10:32:03.216940 | orchestrator | Thursday 19 June 2025 10:27:41 +0000 (0:00:00.814) 0:01:57.976 *********
2025-06-19 10:32:03.216948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-06-19 10:32:03.216957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-06-19 10:32:03.216965 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:32:03.216973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-06-19 10:32:03.216981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-06-19 10:32:03.216989 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:32:03.216997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-06-19 10:32:03.217005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-06-19 10:32:03.217013 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:32:03.217021 | orchestrator |
2025-06-19 10:32:03.217028 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2025-06-19 10:32:03.217036 | orchestrator | Thursday 19 June 2025 10:27:42 +0000 (0:00:01.372) 0:01:59.349 *********
2025-06-19 10:32:03.217044 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:32:03.217052 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:32:03.217060 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:32:03.217067 | orchestrator |
2025-06-19 10:32:03.217075 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2025-06-19 10:32:03.217083 | orchestrator | Thursday 19 June 2025 10:27:44 +0000 (0:00:01.294) 0:02:00.643 *********
2025-06-19 10:32:03.217091 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:32:03.217099 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:32:03.217106 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:32:03.217114 | orchestrator |
2025-06-19 10:32:03.217122 | orchestrator | TASK [include_role : etcd] *****************************************************
2025-06-19 10:32:03.217130 | orchestrator | Thursday 19 June 2025 10:27:46 +0000 (0:00:02.072) 0:02:02.716 *********
2025-06-19 10:32:03.217137 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:32:03.217145 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:32:03.217153 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:32:03.217160 | orchestrator |
2025-06-19 10:32:03.217172 | orchestrator | TASK [include_role : glance] ***************************************************
2025-06-19 10:32:03.217180 | orchestrator | Thursday 19 June 2025 10:27:46 +0000 (0:00:00.292) 0:02:03.009 *********
2025-06-19 10:32:03.217188 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-19 10:32:03.217196 | orchestrator |
2025-06-19 10:32:03.217203 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2025-06-19 10:32:03.217211 | orchestrator | Thursday 19 June 2025 10:27:47 +0000 (0:00:00.964) 0:02:03.973 *********
2025-06-19 10:32:03.217226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-06-19 10:32:03.217242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-06-19 10:32:03.217260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-06-19 10:32:03.217278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-06-19 10:32:03.217296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-06-19 10:32:03.217311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-06-19 10:32:03.217320 | orchestrator |
2025-06-19 10:32:03.217328 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] ***
2025-06-19 10:32:03.217336 | orchestrator | Thursday 19 June 2025 10:27:51 +0000 (0:00:04.169) 0:02:08.143 *********
2025-06-19 10:32:03.217353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-06-19 10:32:03.217368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-06-19 10:32:03.217377 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:32:03.217392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-06-19 10:32:03.217407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-06-19 10:32:03.217420 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:32:03.217429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-06-19 10:32:03.217446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-06-19 10:32:03.217461 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:32:03.217469 | orchestrator |
2025-06-19 10:32:03.217477 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************
2025-06-19 10:32:03.217485 | orchestrator | Thursday 19 June 2025 10:27:54 +0000 (0:00:03.156) 0:02:11.300 *********
2025-06-19 10:32:03.217493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-06-19 10:32:03.217502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-06-19 10:32:03.217510 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:32:03.217518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-06-19 10:32:03.217526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-06-19 10:32:03.217538 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:32:03.217546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-06-19 10:32:03.217564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-06-19 10:32:03.217573 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:32:03.217581 | orchestrator |
2025-06-19 10:32:03.217588 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] *************
2025-06-19 10:32:03.217596 | orchestrator | Thursday 19 June 2025 10:27:58 +0000 (0:00:03.894) 0:02:15.194 *********
2025-06-19 10:32:03.217604 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:32:03.217612 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:32:03.217620 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:32:03.217627 | orchestrator |
2025-06-19 10:32:03.217635 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] *************
2025-06-19 10:32:03.217643 | orchestrator | Thursday 19 June 2025 10:28:00 +0000 (0:00:01.386) 0:02:16.580 *********
2025-06-19 10:32:03.217651 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:32:03.217659 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:32:03.217666 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:32:03.217674 | orchestrator |
2025-06-19 10:32:03.217682 | orchestrator | TASK [include_role : gnocchi] **************************************************
2025-06-19 10:32:03.217690 | orchestrator | Thursday 19 June 2025 10:28:02 +0000 (0:00:01.993) 0:02:18.574 *********
2025-06-19 10:32:03.217698 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:32:03.217705 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:32:03.217713 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:32:03.217721 | orchestrator |
2025-06-19 10:32:03.217781 | orchestrator | TASK [include_role : grafana] **************************************************
2025-06-19 10:32:03.217790 | orchestrator | Thursday 19 June 2025 10:28:02 +0000 (0:00:00.308) 0:02:18.883 *********
2025-06-19 10:32:03.217798 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-19 10:32:03.217806 | orchestrator |
2025-06-19 10:32:03.217814 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ********************
2025-06-19 10:32:03.217822 | orchestrator | Thursday 19 June 2025 10:28:03 +0000 (0:00:01.034) 0:02:19.917 *********
2025-06-19 10:32:03.217830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-19 10:32:03.217840 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-19 10:32:03.217857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-19 10:32:03.217866 | orchestrator |
2025-06-19 10:32:03.217874 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] ***
2025-06-19 10:32:03.217882 | orchestrator | Thursday 19 June 2025 10:28:06 +0000 (0:00:03.282) 0:02:23.200 *********
2025-06-19 10:32:03.217896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-19 10:32:03.217904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-19 10:32:03.217913 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:32:03.217920 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:32:03.217929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-19 10:32:03.217937 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:32:03.217944 | orchestrator |
2025-06-19 10:32:03.217952 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] ***********************
2025-06-19 10:32:03.217960 | orchestrator | Thursday 19 June 2025 10:28:07 +0000 (0:00:00.393) 0:02:23.593 *********
2025-06-19 10:32:03.217968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2025-06-19 10:32:03.217977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port':
'3000'}})  2025-06-19 10:32:03.217990 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.217998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-06-19 10:32:03.218006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-06-19 10:32:03.218013 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.218069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-06-19 10:32:03.218084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-06-19 10:32:03.218098 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.218112 | orchestrator | 2025-06-19 10:32:03.218131 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-06-19 10:32:03.218144 | orchestrator | Thursday 19 June 2025 10:28:08 +0000 (0:00:00.850) 0:02:24.444 ********* 2025-06-19 10:32:03.218158 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:32:03.218172 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:32:03.218186 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:32:03.218200 | orchestrator | 2025-06-19 10:32:03.218213 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-06-19 10:32:03.218225 | orchestrator | Thursday 19 June 2025 10:28:09 +0000 (0:00:01.355) 0:02:25.799 ********* 2025-06-19 10:32:03.218235 | orchestrator | 
changed: [testbed-node-0] 2025-06-19 10:32:03.218242 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:32:03.218249 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:32:03.218255 | orchestrator | 2025-06-19 10:32:03.218262 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-06-19 10:32:03.218268 | orchestrator | Thursday 19 June 2025 10:28:11 +0000 (0:00:02.062) 0:02:27.862 ********* 2025-06-19 10:32:03.218275 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.218281 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.218293 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.218299 | orchestrator | 2025-06-19 10:32:03.218306 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-06-19 10:32:03.218313 | orchestrator | Thursday 19 June 2025 10:28:11 +0000 (0:00:00.314) 0:02:28.176 ********* 2025-06-19 10:32:03.218319 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:32:03.218326 | orchestrator | 2025-06-19 10:32:03.218332 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-06-19 10:32:03.218339 | orchestrator | Thursday 19 June 2025 10:28:12 +0000 (0:00:01.039) 0:02:29.216 ********* 2025-06-19 10:32:03.218347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 
'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-19 10:32:03.218370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 
'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-19 10:32:03.218379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 
'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-19 10:32:03.218391 | orchestrator | 2025-06-19 10:32:03.218398 
| orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-06-19 10:32:03.218408 | orchestrator | Thursday 19 June 2025 10:28:16 +0000 (0:00:03.875) 0:02:33.091 ********* 2025-06-19 10:32:03.218428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-19 10:32:03.218442 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.218449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-19 10:32:03.218456 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.218474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-19 10:32:03.218486 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.218494 | orchestrator | 2025-06-19 10:32:03.218506 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-06-19 10:32:03.218513 | orchestrator | Thursday 19 June 2025 10:28:17 +0000 (0:00:01.005) 0:02:34.097 ********* 2025-06-19 10:32:03.218520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-19 10:32:03.218528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}}) 
 2025-06-19 10:32:03.218535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-19 10:32:03.218543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-19 10:32:03.218550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-19 10:32:03.218557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-06-19 10:32:03.218573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-19 10:32:03.218580 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.218587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-19 10:32:03.218605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-19 10:32:03.218612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-06-19 10:32:03.218623 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.218630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-19 10:32:03.218637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-19 10:32:03.218644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-19 10:32:03.218651 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-19 10:32:03.218658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-06-19 10:32:03.218664 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.218671 | orchestrator | 2025-06-19 10:32:03.218678 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-06-19 10:32:03.218685 | orchestrator | Thursday 19 June 2025 10:28:18 +0000 (0:00:01.308) 0:02:35.405 ********* 2025-06-19 10:32:03.218691 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:32:03.218698 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:32:03.218705 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:32:03.218711 | orchestrator | 2025-06-19 10:32:03.218718 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-06-19 10:32:03.218724 | orchestrator | Thursday 19 June 2025 10:28:20 +0000 (0:00:01.207) 0:02:36.613 ********* 2025-06-19 10:32:03.218748 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:32:03.218755 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:32:03.218761 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:32:03.218768 | orchestrator | 2025-06-19 10:32:03.218774 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-06-19 10:32:03.218781 | orchestrator | Thursday 19 June 2025 10:28:22 +0000 (0:00:01.868) 0:02:38.482 ********* 2025-06-19 10:32:03.218788 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.218817 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.218824 | 
orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.218831 | orchestrator | 2025-06-19 10:32:03.218837 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-06-19 10:32:03.218844 | orchestrator | Thursday 19 June 2025 10:28:22 +0000 (0:00:00.352) 0:02:38.834 ********* 2025-06-19 10:32:03.218850 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.218857 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.218864 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.218870 | orchestrator | 2025-06-19 10:32:03.218877 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-06-19 10:32:03.218884 | orchestrator | Thursday 19 June 2025 10:28:22 +0000 (0:00:00.300) 0:02:39.135 ********* 2025-06-19 10:32:03.218894 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:32:03.218901 | orchestrator | 2025-06-19 10:32:03.218907 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-06-19 10:32:03.218919 | orchestrator | Thursday 19 June 2025 10:28:23 +0000 (0:00:01.175) 0:02:40.311 ********* 2025-06-19 10:32:03.218931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-19 10:32:03.218939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-19 10:32:03.218947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-19 
10:32:03.218955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-19 10:32:03.218962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-19 10:32:03.218978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-19 10:32:03.218990 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-19 10:32:03.218998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-19 10:32:03.219005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-19 10:32:03.219012 | orchestrator | 2025-06-19 10:32:03.219019 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-06-19 10:32:03.219026 | orchestrator | Thursday 19 June 2025 10:28:27 +0000 (0:00:03.494) 0:02:43.805 ********* 2025-06-19 10:32:03.219033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-19 10:32:03.219048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-19 10:32:03.219060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-19 10:32:03.219067 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.219074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-19 10:32:03.219082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-19 10:32:03.219089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-19 10:32:03.219096 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.219106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-19 10:32:03.219122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-19 10:32:03.219130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-19 10:32:03.219137 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.219143 | orchestrator | 2025-06-19 10:32:03.219150 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] 
********************** 2025-06-19 10:32:03.219157 | orchestrator | Thursday 19 June 2025 10:28:27 +0000 (0:00:00.576) 0:02:44.381 ********* 2025-06-19 10:32:03.219164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-19 10:32:03.219172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-19 10:32:03.219178 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.219185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-19 10:32:03.219192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-19 10:32:03.219199 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.219205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-19 10:32:03.219216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 
'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-19 10:32:03.219223 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.219230 | orchestrator | 2025-06-19 10:32:03.219237 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-06-19 10:32:03.219243 | orchestrator | Thursday 19 June 2025 10:28:29 +0000 (0:00:01.103) 0:02:45.485 ********* 2025-06-19 10:32:03.219250 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:32:03.219256 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:32:03.219263 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:32:03.219269 | orchestrator | 2025-06-19 10:32:03.219276 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-06-19 10:32:03.219283 | orchestrator | Thursday 19 June 2025 10:28:30 +0000 (0:00:01.242) 0:02:46.728 ********* 2025-06-19 10:32:03.219289 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:32:03.219296 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:32:03.219302 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:32:03.219309 | orchestrator | 2025-06-19 10:32:03.219318 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-06-19 10:32:03.219325 | orchestrator | Thursday 19 June 2025 10:28:32 +0000 (0:00:01.960) 0:02:48.688 ********* 2025-06-19 10:32:03.219332 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.219338 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.219345 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.219351 | orchestrator | 2025-06-19 10:32:03.219358 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-06-19 10:32:03.219365 | orchestrator | Thursday 19 June 2025 10:28:32 +0000 (0:00:00.316) 0:02:49.004 ********* 2025-06-19 10:32:03.219371 | orchestrator | included: magnum 
for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:32:03.219378 | orchestrator | 2025-06-19 10:32:03.219385 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-06-19 10:32:03.219391 | orchestrator | Thursday 19 June 2025 10:28:33 +0000 (0:00:01.175) 0:02:50.180 ********* 2025-06-19 10:32:03.219403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-19 10:32:03.219411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.219418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-19 10:32:03.219433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.219446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 
'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-19 10:32:03.219457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.219465 | orchestrator | 2025-06-19 10:32:03.219472 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-06-19 10:32:03.219478 | orchestrator | Thursday 19 June 2025 10:28:37 +0000 (0:00:03.472) 0:02:53.653 ********* 2025-06-19 10:32:03.219485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 
'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-19 10:32:03.219497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.219504 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.219514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-19 10:32:03.219525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.219532 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.219539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': 
{'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-19 10:32:03.219546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.219557 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.219564 | orchestrator | 2025-06-19 10:32:03.219571 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-06-19 10:32:03.219577 | orchestrator | Thursday 19 June 2025 10:28:37 +0000 (0:00:00.611) 0:02:54.264 ********* 2025-06-19 10:32:03.219584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-06-19 10:32:03.219591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-06-19 10:32:03.219598 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.219605 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-06-19 10:32:03.219612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-06-19 10:32:03.219619 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.219625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-06-19 10:32:03.219632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-06-19 10:32:03.219639 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.219645 | orchestrator | 2025-06-19 10:32:03.219655 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-06-19 10:32:03.219662 | orchestrator | Thursday 19 June 2025 10:28:39 +0000 (0:00:01.227) 0:02:55.492 ********* 2025-06-19 10:32:03.219669 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:32:03.219675 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:32:03.219682 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:32:03.219688 | orchestrator | 2025-06-19 10:32:03.219695 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-06-19 10:32:03.219702 | orchestrator | Thursday 19 June 2025 10:28:40 +0000 (0:00:01.258) 0:02:56.750 ********* 2025-06-19 10:32:03.219708 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:32:03.219715 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:32:03.219722 | orchestrator | changed: 
[testbed-node-2] 2025-06-19 10:32:03.219739 | orchestrator | 2025-06-19 10:32:03.219746 | orchestrator | TASK [include_role : manila] *************************************************** 2025-06-19 10:32:03.219753 | orchestrator | Thursday 19 June 2025 10:28:42 +0000 (0:00:01.902) 0:02:58.652 ********* 2025-06-19 10:32:03.219763 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:32:03.219770 | orchestrator | 2025-06-19 10:32:03.219777 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-06-19 10:32:03.219784 | orchestrator | Thursday 19 June 2025 10:28:43 +0000 (0:00:01.264) 0:02:59.917 ********* 2025-06-19 10:32:03.219791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-06-19 10:32:03.219803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-06-19 10:32:03.219810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.219817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.219828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.219840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.219851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.219858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.219865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-06-19 10:32:03.219872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.219879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': 
{'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.219890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.219901 | orchestrator | 2025-06-19 10:32:03.219908 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-06-19 10:32:03.219915 | orchestrator | Thursday 19 June 2025 10:28:46 +0000 (0:00:03.402) 0:03:03.320 ********* 2025-06-19 10:32:03.219921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-06-19 10:32:03.219928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.219949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.219957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.219963 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.219974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-06-19 10:32:03.219989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.219996 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.220003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.220010 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.220017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-06-19 10:32:03.220024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.220034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.220049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.220056 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.220063 | orchestrator | 2025-06-19 10:32:03.220070 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-06-19 10:32:03.220077 | orchestrator | Thursday 19 June 2025 10:28:47 +0000 (0:00:00.662) 0:03:03.983 ********* 2025-06-19 10:32:03.220084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-06-19 10:32:03.220091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-06-19 10:32:03.220097 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.220104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-06-19 10:32:03.220111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-06-19 10:32:03.220117 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.220124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-06-19 10:32:03.220131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-06-19 10:32:03.220138 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.220144 | orchestrator | 2025-06-19 10:32:03.220151 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-06-19 10:32:03.220158 | orchestrator | Thursday 19 June 2025 10:28:48 +0000 (0:00:01.088) 0:03:05.071 ********* 2025-06-19 10:32:03.220165 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:32:03.220171 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:32:03.220178 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:32:03.220185 | orchestrator | 2025-06-19 10:32:03.220191 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-06-19 10:32:03.220198 | orchestrator | Thursday 19 June 2025 10:28:49 +0000 (0:00:01.251) 0:03:06.322 ********* 2025-06-19 10:32:03.220205 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:32:03.220211 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:32:03.220218 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:32:03.220225 | orchestrator | 2025-06-19 10:32:03.220231 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-06-19 10:32:03.220238 | orchestrator | Thursday 19 June 2025 10:28:51 +0000 (0:00:01.934) 0:03:08.256 ********* 2025-06-19 10:32:03.220245 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:32:03.220251 | orchestrator | 2025-06-19 10:32:03.220258 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-06-19 10:32:03.220265 | orchestrator | Thursday 19 June 2025 10:28:52 +0000 (0:00:01.058) 0:03:09.314 ********* 2025-06-19 10:32:03.220275 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-19 10:32:03.220282 | orchestrator | 2025-06-19 10:32:03.220289 | orchestrator | TASK 
[haproxy-config : Copying over mariadb haproxy config] ******************** 2025-06-19 10:32:03.220296 | orchestrator | Thursday 19 June 2025 10:28:55 +0000 (0:00:03.019) 0:03:12.334 ********* 2025-06-19 10:32:03.220313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 
inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-19 10:32:03.220321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-19 10:32:03.220328 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.220335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-19 10:32:03.220356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-19 10:32:03.220364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-19 10:32:03.220371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-19 10:32:03.220378 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.220385 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.220391 | orchestrator | 2025-06-19 10:32:03.220398 | orchestrator | TASK [haproxy-config : Add 
configuration for mariadb when using single external frontend] *** 2025-06-19 10:32:03.220405 | orchestrator | Thursday 19 June 2025 10:28:58 +0000 (0:00:02.141) 0:03:14.475 ********* 2025-06-19 10:32:03.220420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', '']}}}})  2025-06-19 10:32:03.220432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-19 10:32:03.220439 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.220446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-19 10:32:03.220458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-19 10:32:03.220465 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.220611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-06-19 10:32:03.220624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-06-19 10:32:03.220631 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:32:03.220638 | orchestrator |
2025-06-19 10:32:03.220644 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] ***********************
2025-06-19 10:32:03.220651 | orchestrator | Thursday 19 June 2025 10:28:59 +0000 (0:00:01.947) 0:03:16.423 *********
2025-06-19 10:32:03.220658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-06-19 10:32:03.220670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-06-19 10:32:03.220682 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:32:03.220690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-06-19 10:32:03.220701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-06-19 10:32:03.220708 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:32:03.220783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-06-19 10:32:03.220794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-06-19 10:32:03.220805 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:32:03.220814 | orchestrator |
2025-06-19 10:32:03.220821 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************
2025-06-19 10:32:03.220828 | orchestrator | Thursday 19 June 2025 10:29:02 +0000 (0:00:02.706) 0:03:19.130 *********
2025-06-19 10:32:03.220835 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:32:03.220841 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:32:03.220848 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:32:03.220860 | orchestrator |
2025-06-19 10:32:03.220866 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2025-06-19 10:32:03.220873 | orchestrator | Thursday 19 June 2025 10:29:04 +0000 (0:00:02.227) 0:03:21.357 *********
2025-06-19 10:32:03.220880 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:32:03.220886 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:32:03.220893 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:32:03.220899 | orchestrator |
2025-06-19 10:32:03.220906 | orchestrator | TASK [include_role : masakari] *************************************************
2025-06-19 10:32:03.220913 | orchestrator | Thursday 19 June 2025 10:29:06 +0000 (0:00:01.419) 0:03:22.777 *********
2025-06-19 10:32:03.220919 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:32:03.220926 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:32:03.220933 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:32:03.220939 | orchestrator |
2025-06-19 10:32:03.220946 | orchestrator | TASK [include_role : memcached] ************************************************
2025-06-19 10:32:03.220953 | orchestrator | Thursday 19 June 2025 10:29:06 +0000 (0:00:00.320) 0:03:23.098 *********
2025-06-19 10:32:03.220959 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-19 10:32:03.220966 | orchestrator |
2025-06-19 10:32:03.220972 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2025-06-19 10:32:03.220979 | orchestrator | Thursday 19 June 2025 10:29:07 +0000 (0:00:01.074) 0:03:24.172 *********
2025-06-19 10:32:03.220986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-06-19 10:32:03.220998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-06-19 10:32:03.221052 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-06-19 10:32:03.221062 | orchestrator |
2025-06-19 10:32:03.221074 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] ***
2025-06-19 10:32:03.221082 | orchestrator | Thursday 19 June 2025 10:29:09 +0000 (0:00:01.779) 0:03:25.951 *********
2025-06-19 10:32:03.221094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-06-19 10:32:03.221101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-06-19 10:32:03.221126 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:32:03.221133 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:32:03.221140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-06-19 10:32:03.221147 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:32:03.221153 | orchestrator |
2025-06-19 10:32:03.221160 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] *********************
2025-06-19 10:32:03.221167 | orchestrator | Thursday 19 June 2025 10:29:09 +0000 (0:00:00.389) 0:03:26.340 *********
2025-06-19 10:32:03.221178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-06-19 10:32:03.221185 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:32:03.221192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-06-19 10:32:03.221199 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:32:03.221252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-06-19 10:32:03.221262 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:32:03.221273 | orchestrator |
2025-06-19 10:32:03.221282 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2025-06-19 10:32:03.221294 | orchestrator | Thursday 19 June 2025 10:29:10 +0000 (0:00:00.859) 0:03:27.200 *********
2025-06-19 10:32:03.221301 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:32:03.221307 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:32:03.221314 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:32:03.221320 | orchestrator |
2025-06-19 10:32:03.221327 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2025-06-19 10:32:03.221333 | orchestrator | Thursday 19 June 2025 10:29:11 +0000 (0:00:00.823) 0:03:28.023 *********
2025-06-19 10:32:03.221340 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:32:03.221346 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:32:03.221353 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:32:03.221359 | orchestrator |
2025-06-19 10:32:03.221366 | orchestrator | TASK [include_role : mistral] **************************************************
2025-06-19 10:32:03.221373 | orchestrator | Thursday 19 June 2025 10:29:12 +0000 (0:00:01.246) 0:03:29.269 *********
2025-06-19 10:32:03.221390 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:32:03.221397 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:32:03.221403 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:32:03.221410 | orchestrator |
2025-06-19 10:32:03.221416 | orchestrator | TASK [include_role : neutron] **************************************************
2025-06-19 10:32:03.221423 | orchestrator | Thursday 19 June 2025 10:29:13 +0000 (0:00:00.322) 0:03:29.592 *********
2025-06-19 10:32:03.221430 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-19 10:32:03.221436 | orchestrator |
2025-06-19 10:32:03.221443 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ********************
2025-06-19 10:32:03.221450 | orchestrator | Thursday 19 June 2025 10:29:14 +0000 (0:00:01.381) 0:03:30.974 *********
2025-06-19 10:32:03.221457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-19 10:32:03.221464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-06-19 10:32:03.221476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-06-19 10:32:03.221536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-06-19 10:32:03.221546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-06-19 10:32:03.221559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-06-19 10:32:03.221568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-06-19 10:32:03.221575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-06-19 10:32:03.221582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-06-19 10:32:03.221640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-19 10:32:03.221652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-06-19 10:32:03.221664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-06-19 10:32:03.221672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-06-19 10:32:03.221679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-06-19 10:32:03.221686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-06-19 10:32:03.221780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-19 10:32:03.221792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-06-19 10:32:03.221799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-06-19 10:32:03.221806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-06-19 10:32:03.221813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-06-19 10:32:03.221830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-06-19 10:32:03.221897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-06-19 10:32:03.221908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-06-19 10:32:03.221921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-06-19 10:32:03.221929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-06-19 10:32:03.221936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-19 10:32:03.221952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-06-19 10:32:03.222001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-06-19 10:32:03.222011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-19 10:32:03.222051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-06-19 10:32:03.222058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-06-19 10:32:03.222065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-06-19 10:32:03.222081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-06-19 10:32:03.222134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL',
"healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-19 10:32:03.222143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-19 10:32:03.222149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.222157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.222168 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-19 10:32:03.222185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-19 10:32:03.222192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-19 10:32:03.222240 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-19 10:32:03.222249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.222259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.222268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-19 10:32:03.222280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.222290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-19 10:32:03.222350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-19 10:32:03.222359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.222370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-19 10:32:03.222378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-19 10:32:03.222395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.222402 | orchestrator | 2025-06-19 10:32:03.222412 | orchestrator | TASK [haproxy-config : Add configuration for 
neutron when using single external frontend] *** 2025-06-19 10:32:03.222419 | orchestrator | Thursday 19 June 2025 10:29:18 +0000 (0:00:04.285) 0:03:35.259 ********* 2025-06-19 10:32:03.222467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-19 10:32:03.222476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.222485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.222495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.222507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-19 10:32:03.222517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.222577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-19 10:32:03.222590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 
'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-19 10:32:03.222597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.222608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-19 10:32:03.222615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.222625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.222679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.222688 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-19 10:32:03.222695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-19 10:32:03.222707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.222713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.222723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-19 10:32:03.222777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}}})  2025-06-19 10:32:03.222812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-19 10:32:03.222819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.222831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-19 10:32:03.222844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 
'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-19 10:32:03.222854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.222904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-19 10:32:03.222914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-19 10:32:03.222925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.222937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.222943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-19 10:32:03.222950 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.222956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-19 10:32:03.222966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.223014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-19 10:32:03.223024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-19 10:32:03.223050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.223057 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.223064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-19 10:32:03.223074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 
'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.223124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.223134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 
'timeout': '30'}}})  2025-06-19 10:32:03.223151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-19 10:32:03.223157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.223164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-19 10:32:03.223176 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-19 10:32:03.223213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.223221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  
2025-06-19 10:32:03.223232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.223238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-19 10:32:03.223244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-19 10:32:03.223251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.223275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-19 10:32:03.223282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-19 10:32:03.223294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.223300 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.223306 | orchestrator | 2025-06-19 10:32:03.223312 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-06-19 10:32:03.223319 | orchestrator | Thursday 19 June 2025 10:29:20 +0000 (0:00:01.731) 0:03:36.991 ********* 2025-06-19 10:32:03.223325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-06-19 10:32:03.223332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-06-19 10:32:03.223338 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.223344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-06-19 10:32:03.223350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-06-19 10:32:03.223356 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.223430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-06-19 10:32:03.223449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-06-19 10:32:03.223455 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.223461 | orchestrator | 2025-06-19 10:32:03.223468 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-06-19 10:32:03.223474 | orchestrator | Thursday 19 June 2025 10:29:22 +0000 (0:00:02.030) 0:03:39.022 ********* 2025-06-19 10:32:03.223480 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:32:03.223486 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:32:03.223492 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:32:03.223498 | orchestrator | 2025-06-19 10:32:03.223504 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-06-19 10:32:03.223514 | orchestrator | Thursday 19 June 2025 10:29:23 +0000 (0:00:01.338) 0:03:40.360 ********* 2025-06-19 10:32:03.223521 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:32:03.223527 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:32:03.223533 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:32:03.223539 | orchestrator | 2025-06-19 10:32:03.223545 | 
orchestrator | TASK [include_role : placement] ************************************************ 2025-06-19 10:32:03.223551 | orchestrator | Thursday 19 June 2025 10:29:25 +0000 (0:00:02.012) 0:03:42.373 ********* 2025-06-19 10:32:03.223557 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:32:03.223568 | orchestrator | 2025-06-19 10:32:03.223574 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-06-19 10:32:03.223580 | orchestrator | Thursday 19 June 2025 10:29:27 +0000 (0:00:01.091) 0:03:43.464 ********* 2025-06-19 10:32:03.223610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-19 10:32:03.223619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-19 10:32:03.223626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-19 10:32:03.223632 | orchestrator | 2025-06-19 10:32:03.223639 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-06-19 10:32:03.223645 | orchestrator | Thursday 19 June 2025 10:29:30 +0000 (0:00:03.371) 0:03:46.836 ********* 2025-06-19 10:32:03.223655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-19 10:32:03.223666 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.223691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-19 10:32:03.223698 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.223705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 
'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-19 10:32:03.223711 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.223717 | orchestrator | 2025-06-19 10:32:03.223723 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-06-19 10:32:03.223775 | orchestrator | Thursday 19 June 2025 10:29:30 +0000 (0:00:00.522) 0:03:47.358 ********* 2025-06-19 10:32:03.223783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-19 10:32:03.223791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-19 10:32:03.223799 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.223806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-19 10:32:03.223813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-19 10:32:03.223821 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.223828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-19 10:32:03.223835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-19 10:32:03.223847 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.223854 | orchestrator | 2025-06-19 10:32:03.223861 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-06-19 10:32:03.223868 | orchestrator | Thursday 19 June 2025 10:29:31 +0000 (0:00:00.810) 0:03:48.169 ********* 2025-06-19 10:32:03.223875 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:32:03.223882 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:32:03.223889 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:32:03.223896 | orchestrator | 2025-06-19 10:32:03.223903 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-06-19 10:32:03.223914 | orchestrator | Thursday 19 June 2025 10:29:33 +0000 (0:00:01.848) 0:03:50.018 ********* 2025-06-19 10:32:03.223921 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:32:03.223928 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:32:03.223935 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:32:03.223942 | orchestrator | 2025-06-19 10:32:03.223949 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-06-19 10:32:03.223956 | orchestrator | Thursday 19 
June 2025 10:29:35 +0000 (0:00:01.743) 0:03:51.762 ********* 2025-06-19 10:32:03.223963 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:32:03.223970 | orchestrator | 2025-06-19 10:32:03.223977 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-06-19 10:32:03.223984 | orchestrator | Thursday 19 June 2025 10:29:36 +0000 (0:00:01.495) 0:03:53.257 ********* 2025-06-19 10:32:03.224012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-19 10:32:03.224022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.224030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.224045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-19 10:32:03.224071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.224079 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.224087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-19 10:32:03.224095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.224107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-conductor 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.224114 | orchestrator | 2025-06-19 10:32:03.224121 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-06-19 10:32:03.224127 | orchestrator | Thursday 19 June 2025 10:29:41 +0000 (0:00:04.226) 0:03:57.484 ********* 2025-06-19 10:32:03.224153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-19 10:32:03.224161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.224167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.224172 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.224178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-19 10:32:03.224191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.224197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.224203 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.224225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-19 10:32:03.224232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.224242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.224247 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.224253 | orchestrator | 2025-06-19 10:32:03.224258 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-06-19 10:32:03.224264 | orchestrator | Thursday 19 June 2025 10:29:41 +0000 (0:00:00.930) 0:03:58.415 ********* 2025-06-19 10:32:03.224269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-19 10:32:03.224275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-19 10:32:03.224281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-19 10:32:03.224291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-19 10:32:03.224297 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.224317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-19 10:32:03.224323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-19 10:32:03.224344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-19 10:32:03.224350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-19 10:32:03.224356 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.224361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-19 10:32:03.224367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-19 10:32:03.224372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-19 10:32:03.224378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-19 10:32:03.224387 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.224393 | orchestrator | 2025-06-19 10:32:03.224398 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-06-19 10:32:03.224403 | orchestrator | Thursday 19 June 2025 10:29:42 +0000 (0:00:00.957) 0:03:59.373 ********* 
2025-06-19 10:32:03.224409 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:32:03.224414 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:32:03.224419 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:32:03.224424 | orchestrator | 2025-06-19 10:32:03.224430 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-06-19 10:32:03.224435 | orchestrator | Thursday 19 June 2025 10:29:44 +0000 (0:00:01.309) 0:04:00.683 ********* 2025-06-19 10:32:03.224440 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:32:03.224446 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:32:03.224451 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:32:03.224456 | orchestrator | 2025-06-19 10:32:03.224461 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-06-19 10:32:03.224467 | orchestrator | Thursday 19 June 2025 10:29:46 +0000 (0:00:02.051) 0:04:02.735 ********* 2025-06-19 10:32:03.224472 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:32:03.224477 | orchestrator | 2025-06-19 10:32:03.224483 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-06-19 10:32:03.224488 | orchestrator | Thursday 19 June 2025 10:29:47 +0000 (0:00:01.519) 0:04:04.254 ********* 2025-06-19 10:32:03.224493 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-06-19 10:32:03.224499 | orchestrator | 2025-06-19 10:32:03.224504 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-06-19 10:32:03.224509 | orchestrator | Thursday 19 June 2025 10:29:48 +0000 (0:00:00.795) 0:04:05.050 ********* 2025-06-19 10:32:03.224515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 
'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-06-19 10:32:03.224524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-06-19 10:32:03.224530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-06-19 10:32:03.224536 | orchestrator | 2025-06-19 10:32:03.224555 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-06-19 10:32:03.224562 | orchestrator | Thursday 19 June 2025 10:29:52 +0000 (0:00:04.089) 0:04:09.139 ********* 2025-06-19 10:32:03.224568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': 
{'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-19 10:32:03.224577 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.224583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-19 10:32:03.224588 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.224594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-19 10:32:03.224599 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.224604 | orchestrator | 2025-06-19 10:32:03.224610 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-06-19 10:32:03.224615 | orchestrator | Thursday 19 June 2025 10:29:53 +0000 (0:00:01.260) 0:04:10.400 ********* 2025-06-19 10:32:03.224620 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-19 10:32:03.224626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-19 10:32:03.224632 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.224638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-19 10:32:03.224643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-19 10:32:03.224648 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.224654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-19 10:32:03.224662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-19 10:32:03.224668 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.224673 | orchestrator | 2025-06-19 10:32:03.224679 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL 
users config] ********** 2025-06-19 10:32:03.224684 | orchestrator | Thursday 19 June 2025 10:29:55 +0000 (0:00:01.540) 0:04:11.940 ********* 2025-06-19 10:32:03.224689 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:32:03.224695 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:32:03.224704 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:32:03.224709 | orchestrator | 2025-06-19 10:32:03.224714 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-06-19 10:32:03.224720 | orchestrator | Thursday 19 June 2025 10:29:58 +0000 (0:00:02.629) 0:04:14.570 ********* 2025-06-19 10:32:03.224738 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:32:03.224744 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:32:03.224749 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:32:03.224755 | orchestrator | 2025-06-19 10:32:03.224775 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-06-19 10:32:03.224781 | orchestrator | Thursday 19 June 2025 10:30:00 +0000 (0:00:02.680) 0:04:17.250 ********* 2025-06-19 10:32:03.224787 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-06-19 10:32:03.224793 | orchestrator | 2025-06-19 10:32:03.224798 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-06-19 10:32:03.224803 | orchestrator | Thursday 19 June 2025 10:30:02 +0000 (0:00:01.396) 0:04:18.646 ********* 2025-06-19 10:32:03.224809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 
'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-19 10:32:03.224815 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.224821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-19 10:32:03.224826 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.224832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-19 10:32:03.224837 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.224843 | orchestrator | 2025-06-19 10:32:03.224848 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-06-19 10:32:03.224854 | orchestrator | Thursday 19 June 2025 10:30:03 +0000 (0:00:01.258) 0:04:19.905 ********* 2025-06-19 10:32:03.224859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 
'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-19 10:32:03.224865 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.224873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-19 10:32:03.224883 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.224889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-19 10:32:03.224895 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.224900 | orchestrator | 2025-06-19 10:32:03.224920 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-06-19 10:32:03.224926 | 
orchestrator | Thursday 19 June 2025 10:30:04 +0000 (0:00:01.246) 0:04:21.151 ********* 2025-06-19 10:32:03.224931 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.224937 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.224942 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.224947 | orchestrator | 2025-06-19 10:32:03.224953 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-06-19 10:32:03.224958 | orchestrator | Thursday 19 June 2025 10:30:06 +0000 (0:00:01.782) 0:04:22.934 ********* 2025-06-19 10:32:03.224964 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:32:03.224969 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:32:03.224974 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:32:03.224980 | orchestrator | 2025-06-19 10:32:03.224985 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-06-19 10:32:03.224991 | orchestrator | Thursday 19 June 2025 10:30:08 +0000 (0:00:02.422) 0:04:25.356 ********* 2025-06-19 10:32:03.224996 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:32:03.225002 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:32:03.225007 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:32:03.225012 | orchestrator | 2025-06-19 10:32:03.225018 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-06-19 10:32:03.225023 | orchestrator | Thursday 19 June 2025 10:30:12 +0000 (0:00:03.126) 0:04:28.483 ********* 2025-06-19 10:32:03.225029 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-06-19 10:32:03.225034 | orchestrator | 2025-06-19 10:32:03.225040 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-06-19 10:32:03.225045 | orchestrator | Thursday 19 June 2025 10:30:12 +0000 
(0:00:00.836) 0:04:29.319 ********* 2025-06-19 10:32:03.225051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-19 10:32:03.225056 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.225062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-19 10:32:03.225071 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.225077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-19 10:32:03.225082 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.225088 | orchestrator | 
2025-06-19 10:32:03.225093 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-06-19 10:32:03.225098 | orchestrator | Thursday 19 June 2025 10:30:14 +0000 (0:00:01.341) 0:04:30.661 ********* 2025-06-19 10:32:03.225107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-19 10:32:03.225112 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.225132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-19 10:32:03.225139 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.225144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-19 10:32:03.225150 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.225155 | orchestrator | 2025-06-19 10:32:03.225160 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-06-19 10:32:03.225166 | orchestrator | Thursday 19 June 2025 10:30:15 +0000 (0:00:01.395) 0:04:32.056 ********* 2025-06-19 10:32:03.225171 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.225177 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.225182 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.225187 | orchestrator | 2025-06-19 10:32:03.225193 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-06-19 10:32:03.225198 | orchestrator | Thursday 19 June 2025 10:30:17 +0000 (0:00:01.439) 0:04:33.496 ********* 2025-06-19 10:32:03.225204 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:32:03.225209 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:32:03.225214 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:32:03.225220 | orchestrator | 2025-06-19 10:32:03.225225 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-06-19 10:32:03.225234 | orchestrator | Thursday 19 June 2025 10:30:19 +0000 (0:00:02.364) 0:04:35.860 ********* 2025-06-19 10:32:03.225240 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:32:03.225245 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:32:03.225251 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:32:03.225256 | orchestrator | 2025-06-19 10:32:03.225261 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-06-19 10:32:03.225267 | orchestrator | Thursday 19 June 2025 10:30:22 +0000 (0:00:03.249) 0:04:39.110 ********* 2025-06-19 10:32:03.225272 | orchestrator | 
included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:32:03.225278 | orchestrator | 2025-06-19 10:32:03.225283 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-06-19 10:32:03.225289 | orchestrator | Thursday 19 June 2025 10:30:24 +0000 (0:00:01.599) 0:04:40.709 ********* 2025-06-19 10:32:03.225294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-19 10:32:03.225303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-19 10:32:03.225309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 
'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-19 10:32:03.225330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-19 10:32:03.225337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.225347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-19 10:32:03.225353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-19 10:32:03.225358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-19 10:32:03.225366 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-19 10:32:03.225387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.225394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 
'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-19 10:32:03.225403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-19 10:32:03.225409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-19 10:32:03.225415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-19 10:32:03.225425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.225430 | orchestrator | 2025-06-19 10:32:03.225436 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-06-19 10:32:03.225441 | orchestrator | Thursday 19 June 2025 10:30:28 +0000 (0:00:03.747) 0:04:44.457 ********* 2025-06-19 10:32:03.225462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-19 10:32:03.225473 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-19 10:32:03.225479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-19 10:32:03.225485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-19 10:32:03.225490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.225496 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.225504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-19 10:32:03.225525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-19 
10:32:03.225535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-19 10:32:03.225541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-19 10:32:03.225546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.225552 | orchestrator | skipping: [testbed-node-1] 2025-06-19 
10:32:03.225557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-19 10:32:03.225566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-19 10:32:03.225586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-19 10:32:03.225596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-19 10:32:03.225602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-19 10:32:03.225607 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.225613 | orchestrator | 2025-06-19 10:32:03.225618 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-06-19 10:32:03.225624 | orchestrator | Thursday 19 June 2025 10:30:29 +0000 (0:00:01.328) 0:04:45.785 ********* 2025-06-19 10:32:03.225629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 
'no'}})  2025-06-19 10:32:03.225635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-19 10:32:03.225641 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.225646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-19 10:32:03.225652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-19 10:32:03.225657 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.225663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-19 10:32:03.225668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-19 10:32:03.225674 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.225679 | orchestrator | 2025-06-19 10:32:03.225684 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-06-19 10:32:03.225690 | orchestrator | Thursday 19 June 2025 10:30:30 +0000 (0:00:00.990) 0:04:46.776 ********* 2025-06-19 10:32:03.225695 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:32:03.225701 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:32:03.225706 | orchestrator | changed: 
[testbed-node-2] 2025-06-19 10:32:03.225711 | orchestrator | 2025-06-19 10:32:03.225717 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-06-19 10:32:03.225722 | orchestrator | Thursday 19 June 2025 10:30:31 +0000 (0:00:01.383) 0:04:48.160 ********* 2025-06-19 10:32:03.225745 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:32:03.225751 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:32:03.225760 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:32:03.225765 | orchestrator | 2025-06-19 10:32:03.225770 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-06-19 10:32:03.225776 | orchestrator | Thursday 19 June 2025 10:30:33 +0000 (0:00:02.055) 0:04:50.215 ********* 2025-06-19 10:32:03.225781 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:32:03.225787 | orchestrator | 2025-06-19 10:32:03.225792 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-06-19 10:32:03.225797 | orchestrator | Thursday 19 June 2025 10:30:35 +0000 (0:00:01.589) 0:04:51.805 ********* 2025-06-19 10:32:03.225820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-19 10:32:03.225827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-19 10:32:03.225833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-19 10:32:03.225839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 
'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-19 10:32:03.225868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-19 
10:32:03.225876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-19 10:32:03.225882 | orchestrator | 2025-06-19 10:32:03.225887 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-06-19 10:32:03.225893 | orchestrator | Thursday 19 June 2025 10:30:40 +0000 (0:00:05.111) 0:04:56.917 ********* 2025-06-19 10:32:03.225898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-19 10:32:03.225907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-19 10:32:03.225917 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.225937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-19 10:32:03.225944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-19 10:32:03.225950 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.225956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-19 10:32:03.225961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-19 10:32:03.225971 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.225976 | orchestrator | 2025-06-19 10:32:03.225982 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-06-19 10:32:03.225990 | orchestrator | Thursday 19 June 2025 10:30:41 +0000 (0:00:00.687) 0:04:57.604 ********* 2025-06-19 10:32:03.225996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}})  2025-06-19 10:32:03.226001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-19 10:32:03.226007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-19 10:32:03.226056 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.226065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-06-19 10:32:03.226071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-19 10:32:03.226076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-19 10:32:03.226082 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.226087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-06-19 10:32:03.226093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}})  2025-06-19 10:32:03.226098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-19 10:32:03.226104 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.226109 | orchestrator | 2025-06-19 10:32:03.226115 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-06-19 10:32:03.226120 | orchestrator | Thursday 19 June 2025 10:30:42 +0000 (0:00:01.561) 0:04:59.166 ********* 2025-06-19 10:32:03.226125 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.226131 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.226136 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.226141 | orchestrator | 2025-06-19 10:32:03.226147 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-06-19 10:32:03.226152 | orchestrator | Thursday 19 June 2025 10:30:43 +0000 (0:00:00.438) 0:04:59.604 ********* 2025-06-19 10:32:03.226162 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.226167 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.226172 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.226178 | orchestrator | 2025-06-19 10:32:03.226183 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-06-19 10:32:03.226188 | orchestrator | Thursday 19 June 2025 10:30:44 +0000 (0:00:01.318) 0:05:00.922 ********* 2025-06-19 10:32:03.226194 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:32:03.226199 | orchestrator | 2025-06-19 10:32:03.226204 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-06-19 
10:32:03.226210 | orchestrator | Thursday 19 June 2025 10:30:46 +0000 (0:00:01.663) 0:05:02.586 ********* 2025-06-19 10:32:03.226216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-19 10:32:03.226227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-19 10:32:03.226250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-19 10:32:03.226258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-19 10:32:03.226264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:32:03.226270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:32:03.226279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 
'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:32:03.226285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:32:03.226294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-19 10:32:03.226299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-19 10:32:03.226322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-19 10:32:03.226329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-19 10:32:03.226335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-06-19 10:32:03.226344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:32:03.226350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-19 10:32:03.226358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-19 10:32:03.226367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-19 10:32:03.226374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 
'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-19 10:32:03.226383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-19 10:32:03.226389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:32:03.226398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:32:03.226403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:32:03.226412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:32:03.226418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-19 10:32:03.226424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 
'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-19 10:32:03.226433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-19 10:32:03.226439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 
'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-19 10:32:03.226447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:32:03.226455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:32:03.226461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-19 10:32:03.226467 | orchestrator | 2025-06-19 10:32:03.226472 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-06-19 10:32:03.226477 | orchestrator | Thursday 19 June 
2025 10:30:50 +0000 (0:00:04.112) 0:05:06.698 ********* 2025-06-19 10:32:03.226483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-19 10:32:03.226492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-19 10:32:03.226498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:32:03.226504 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:32:03.226512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-19 10:32:03.226521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-19 10:32:03.226527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-19 10:32:03.226537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:32:03.226542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:32:03.226548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-19 10:32:03.226553 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.226561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-19 10:32:03.226567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': 
{}}})  2025-06-19 10:32:03.226576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:32:03.226585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:32:03.226591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-19 10:32:03.226596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-19 10:32:03.226602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-19 10:32:03.226610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-19 10:32:03.226619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:32:03.226632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-19 10:32:03.226638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-06-19 10:32:03.226644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:32:03.226649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-19 10:32:03.226655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:32:03.226666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-19 10:32:03.226675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-19 10:32:03.226681 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.226687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 
'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-19 10:32:03.226692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:32:03.226698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:32:03.226703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-19 10:32:03.226709 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.226714 | orchestrator | 2025-06-19 10:32:03.226720 | orchestrator | TASK [haproxy-config : Configuring firewall 
for prometheus] ******************** 2025-06-19 10:32:03.226737 | orchestrator | Thursday 19 June 2025 10:30:51 +0000 (0:00:00.874) 0:05:07.572 ********* 2025-06-19 10:32:03.226743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-06-19 10:32:03.226749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-06-19 10:32:03.226754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-19 10:32:03.226801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-19 10:32:03.226817 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.226823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-06-19 10:32:03.226829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-06-19 10:32:03.226835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-19 10:32:03.226840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-19 10:32:03.226846 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.226852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-06-19 10:32:03.226857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-06-19 10:32:03.226863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-19 10:32:03.226868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-19 10:32:03.226874 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.226879 | orchestrator | 2025-06-19 10:32:03.226884 | orchestrator | TASK 
[proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-06-19 10:32:03.226890 | orchestrator | Thursday 19 June 2025 10:30:52 +0000 (0:00:01.274) 0:05:08.847 ********* 2025-06-19 10:32:03.226895 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.226901 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.226906 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.226911 | orchestrator | 2025-06-19 10:32:03.226916 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-06-19 10:32:03.226922 | orchestrator | Thursday 19 June 2025 10:30:52 +0000 (0:00:00.458) 0:05:09.305 ********* 2025-06-19 10:32:03.226927 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.226932 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.226938 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.226943 | orchestrator | 2025-06-19 10:32:03.226948 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-06-19 10:32:03.226954 | orchestrator | Thursday 19 June 2025 10:30:54 +0000 (0:00:01.316) 0:05:10.621 ********* 2025-06-19 10:32:03.226963 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:32:03.226969 | orchestrator | 2025-06-19 10:32:03.226974 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-06-19 10:32:03.226979 | orchestrator | Thursday 19 June 2025 10:30:55 +0000 (0:00:01.442) 0:05:12.064 ********* 2025-06-19 10:32:03.226992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-19 10:32:03.226998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-19 10:32:03.227004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-19 10:32:03.227010 | orchestrator | 2025-06-19 10:32:03.227016 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-06-19 10:32:03.227021 | orchestrator | Thursday 19 June 2025 10:30:58 +0000 (0:00:02.528) 0:05:14.593 ********* 2025-06-19 10:32:03.227027 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-06-19 10:32:03.227036 | orchestrator | skipping: 
[testbed-node-0] 2025-06-19 10:32:03.227047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-06-19 10:32:03.227053 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.227059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': 
{'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-06-19 10:32:03.227064 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.227070 | orchestrator | 2025-06-19 10:32:03.227075 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-06-19 10:32:03.227080 | orchestrator | Thursday 19 June 2025 10:30:58 +0000 (0:00:00.433) 0:05:15.026 ********* 2025-06-19 10:32:03.227086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-06-19 10:32:03.227091 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.227096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-06-19 10:32:03.227102 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.227107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-06-19 10:32:03.227112 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.227118 | orchestrator | 2025-06-19 10:32:03.227123 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-06-19 10:32:03.227129 | orchestrator | Thursday 19 June 2025 10:30:59 +0000 (0:00:00.620) 0:05:15.647 ********* 2025-06-19 10:32:03.227138 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.227143 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.227149 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.227154 | orchestrator | 2025-06-19 10:32:03.227159 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-06-19 10:32:03.227165 | orchestrator | 
Thursday 19 June 2025 10:31:00 +0000 (0:00:00.831) 0:05:16.478 *********
2025-06-19 10:32:03.227170 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:32:03.227175 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:32:03.227181 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:32:03.227186 | orchestrator |
2025-06-19 10:32:03.227191 | orchestrator | TASK [include_role : skyline] **************************************************
2025-06-19 10:32:03.227196 | orchestrator | Thursday 19 June 2025 10:31:01 +0000 (0:00:01.318) 0:05:17.797 *********
2025-06-19 10:32:03.227202 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-19 10:32:03.227207 | orchestrator |
2025-06-19 10:32:03.227212 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ********************
2025-06-19 10:32:03.227218 | orchestrator | Thursday 19 June 2025 10:31:02 +0000 (0:00:01.471) 0:05:19.269 *********
2025-06-19 10:32:03.227226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-06-19 10:32:03.227235 | orchestrator | changed:
[testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-06-19 10:32:03.227241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-06-19 10:32:03.227250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': 
{'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-06-19 10:32:03.227259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-06-19 10:32:03.227268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 
'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-06-19 10:32:03.227274 | orchestrator | 2025-06-19 10:32:03.227279 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-06-19 10:32:03.227285 | orchestrator | Thursday 19 June 2025 10:31:09 +0000 (0:00:06.711) 0:05:25.980 ********* 2025-06-19 10:32:03.227290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 
'no'}}}})  2025-06-19 10:32:03.227300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-06-19 10:32:03.227306 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.227312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  
2025-06-19 10:32:03.227322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-06-19 10:32:03.227328 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.227334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-06-19 
10:32:03.227339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-06-19 10:32:03.227349 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.227354 | orchestrator | 2025-06-19 10:32:03.227360 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-06-19 10:32:03.227365 | orchestrator | Thursday 19 June 2025 10:31:10 +0000 (0:00:01.223) 0:05:27.204 ********* 2025-06-19 10:32:03.227371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-19 10:32:03.227376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-19 10:32:03.227382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-06-19 10:32:03.227387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-06-19 10:32:03.227392 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:32:03.227398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-06-19 10:32:03.227406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-06-19 10:32:03.227412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-06-19 10:32:03.227417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-06-19 10:32:03.227425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-06-19 10:32:03.227431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-06-19 10:32:03.227437 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:32:03.227442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-06-19 10:32:03.227447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-06-19 10:32:03.227457 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:32:03.227462 | orchestrator |
2025-06-19 10:32:03.227467 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2025-06-19 10:32:03.227473 | orchestrator | Thursday 19 June 2025 10:31:11 +0000 (0:00:00.983) 0:05:28.187 *********
2025-06-19 10:32:03.227478 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:32:03.227483 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:32:03.227489 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:32:03.227494 | orchestrator |
2025-06-19 10:32:03.227499 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2025-06-19 10:32:03.227505 | orchestrator | Thursday 19 June 2025 10:31:13 +0000 (0:00:01.293) 0:05:29.480 *********
2025-06-19 10:32:03.227510 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:32:03.227515 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:32:03.227521 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:32:03.227526 | orchestrator |
2025-06-19 10:32:03.227531 | orchestrator | TASK [include_role : swift] ****************************************************
2025-06-19 10:32:03.227537 | orchestrator | Thursday 19 June 2025 10:31:15 +0000 (0:00:02.239) 0:05:31.720 *********
2025-06-19 10:32:03.227542 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:32:03.227547 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:32:03.227552 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:32:03.227558 | orchestrator |
2025-06-19 10:32:03.227563 | orchestrator | TASK [include_role : tacker] ***************************************************
2025-06-19 10:32:03.227568 | orchestrator | Thursday 19 June 2025 10:31:15 +0000 (0:00:00.632) 0:05:32.353 *********
2025-06-19 10:32:03.227574 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:32:03.227579 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:32:03.227584 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:32:03.227590 | orchestrator |
2025-06-19 10:32:03.227595 | orchestrator | TASK [include_role : trove] ****************************************************
2025-06-19 10:32:03.227600 | orchestrator | Thursday 19 June 2025 10:31:16 +0000 (0:00:00.340) 0:05:32.693 *********
2025-06-19 10:32:03.227605 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:32:03.227611 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:32:03.227616 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:32:03.227621 | orchestrator |
2025-06-19 10:32:03.227627 | orchestrator | TASK [include_role : venus] ****************************************************
2025-06-19 10:32:03.227632 | orchestrator | Thursday 19 June 2025 10:31:16 +0000 (0:00:00.302) 0:05:32.995 *********
2025-06-19 10:32:03.227637 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:32:03.227643 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:32:03.227648 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:32:03.227653 | orchestrator |
2025-06-19 10:32:03.227658 | orchestrator | TASK [include_role : watcher] **************************************************
2025-06-19 10:32:03.227664 | orchestrator | Thursday 19 June 2025 10:31:16 +0000 (0:00:00.310) 0:05:33.306 *********
2025-06-19 10:32:03.227669 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:32:03.227674 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:32:03.227680 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:32:03.227685 | orchestrator |
2025-06-19 10:32:03.227690 | orchestrator | TASK [include_role : zun] ******************************************************
2025-06-19 10:32:03.227696 | orchestrator | Thursday 19 June 2025 10:31:17 +0000 (0:00:00.592) 0:05:33.898 *********
2025-06-19 10:32:03.227701 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:32:03.227706 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:32:03.227711 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:32:03.227717 | orchestrator |
2025-06-19 10:32:03.227722 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2025-06-19 10:32:03.227763 | orchestrator | Thursday 19 June 2025 10:31:18 +0000 (0:00:00.542) 0:05:34.440 *********
2025-06-19 10:32:03.227775 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:32:03.227781 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:32:03.227786 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:32:03.227791 | orchestrator |
2025-06-19 10:32:03.227800 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2025-06-19 10:32:03.227805 | orchestrator | Thursday 19 June 2025 10:31:18 +0000 (0:00:00.649) 0:05:35.139 *********
2025-06-19 10:32:03.227810 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:32:03.227816 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:32:03.227821 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:32:03.227826 | orchestrator |
2025-06-19 10:32:03.227832 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2025-06-19 10:32:03.227837 | orchestrator | Thursday 19 June 2025 10:31:19 +0000 (0:00:00.937) 0:05:35.788 *********
2025-06-19 10:32:03.227842 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:32:03.227847 | orchestrator | ok:
[testbed-node-1]
2025-06-19 10:32:03.227853 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:32:03.227858 | orchestrator |
2025-06-19 10:32:03.227863 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2025-06-19 10:32:03.227869 | orchestrator | Thursday 19 June 2025 10:31:20 +0000 (0:00:00.937) 0:05:36.725 *********
2025-06-19 10:32:03.227874 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:32:03.227879 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:32:03.227888 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:32:03.227893 | orchestrator |
2025-06-19 10:32:03.227898 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2025-06-19 10:32:03.227904 | orchestrator | Thursday 19 June 2025 10:31:21 +0000 (0:00:00.953) 0:05:37.679 *********
2025-06-19 10:32:03.227909 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:32:03.227914 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:32:03.227920 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:32:03.227925 | orchestrator |
2025-06-19 10:32:03.227930 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2025-06-19 10:32:03.227936 | orchestrator | Thursday 19 June 2025 10:31:22 +0000 (0:00:00.865) 0:05:38.544 *********
2025-06-19 10:32:03.227941 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:32:03.227946 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:32:03.227952 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:32:03.227957 | orchestrator |
2025-06-19 10:32:03.227962 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2025-06-19 10:32:03.227968 | orchestrator | Thursday 19 June 2025 10:31:32 +0000 (0:00:10.144) 0:05:48.689 *********
2025-06-19 10:32:03.227973 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:32:03.227978 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:32:03.227984 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:32:03.227989 | orchestrator |
2025-06-19 10:32:03.227994 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2025-06-19 10:32:03.228000 | orchestrator | Thursday 19 June 2025 10:31:33 +0000 (0:00:00.785) 0:05:49.474 *********
2025-06-19 10:32:03.228005 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:32:03.228010 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:32:03.228016 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:32:03.228021 | orchestrator |
2025-06-19 10:32:03.228026 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2025-06-19 10:32:03.228032 | orchestrator | Thursday 19 June 2025 10:31:45 +0000 (0:00:12.898) 0:06:02.373 *********
2025-06-19 10:32:03.228037 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:32:03.228042 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:32:03.228048 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:32:03.228053 | orchestrator |
2025-06-19 10:32:03.228058 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2025-06-19 10:32:03.228063 | orchestrator | Thursday 19 June 2025 10:31:46 +0000 (0:00:00.751) 0:06:03.124 *********
2025-06-19 10:32:03.228069 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:32:03.228074 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:32:03.228084 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:32:03.228089 | orchestrator |
2025-06-19 10:32:03.228094 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2025-06-19 10:32:03.228100 | orchestrator | Thursday 19 June 2025 10:31:56 +0000 (0:00:09.658) 0:06:12.783 *********
2025-06-19 10:32:03.228105 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:32:03.228110 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:32:03.228116 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:32:03.228121 | orchestrator |
2025-06-19 10:32:03.228126 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2025-06-19 10:32:03.228132 | orchestrator | Thursday 19 June 2025 10:31:56 +0000 (0:00:00.349) 0:06:13.133 *********
2025-06-19 10:32:03.228137 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:32:03.228142 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:32:03.228148 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:32:03.228153 | orchestrator |
2025-06-19 10:32:03.228158 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2025-06-19 10:32:03.228164 | orchestrator | Thursday 19 June 2025 10:31:57 +0000 (0:00:00.354) 0:06:13.488 *********
2025-06-19 10:32:03.228169 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:32:03.228174 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:32:03.228180 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:32:03.228185 | orchestrator |
2025-06-19 10:32:03.228190 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2025-06-19 10:32:03.228196 | orchestrator | Thursday 19 June 2025 10:31:57 +0000 (0:00:00.347) 0:06:13.835 *********
2025-06-19 10:32:03.228201 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:32:03.228206 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:32:03.228211 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:32:03.228216 | orchestrator |
2025-06-19 10:32:03.228220 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2025-06-19 10:32:03.228225 | orchestrator | Thursday 19 June 2025 10:31:57 +0000 (0:00:00.340) 0:06:14.175 *********
2025-06-19 10:32:03.228230 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:32:03.228235 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:32:03.228239 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:32:03.228244 | orchestrator | 2025-06-19 10:32:03.228249 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-06-19 10:32:03.228254 | orchestrator | Thursday 19 June 2025 10:31:58 +0000 (0:00:00.740) 0:06:14.915 ********* 2025-06-19 10:32:03.228258 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:32:03.228263 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:32:03.228268 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:32:03.228272 | orchestrator | 2025-06-19 10:32:03.228277 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-06-19 10:32:03.228284 | orchestrator | Thursday 19 June 2025 10:31:58 +0000 (0:00:00.351) 0:06:15.267 ********* 2025-06-19 10:32:03.228289 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:32:03.228294 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:32:03.228299 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:32:03.228303 | orchestrator | 2025-06-19 10:32:03.228308 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-06-19 10:32:03.228313 | orchestrator | Thursday 19 June 2025 10:31:59 +0000 (0:00:00.861) 0:06:16.129 ********* 2025-06-19 10:32:03.228318 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:32:03.228322 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:32:03.228327 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:32:03.228332 | orchestrator | 2025-06-19 10:32:03.228336 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-19 10:32:03.228341 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-06-19 10:32:03.228349 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-06-19 10:32:03.228357 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 
failed=0 skipped=97  rescued=0 ignored=0 2025-06-19 10:32:03.228362 | orchestrator | 2025-06-19 10:32:03.228367 | orchestrator | 2025-06-19 10:32:03.228371 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-19 10:32:03.228376 | orchestrator | Thursday 19 June 2025 10:32:00 +0000 (0:00:01.178) 0:06:17.308 ********* 2025-06-19 10:32:03.228381 | orchestrator | =============================================================================== 2025-06-19 10:32:03.228386 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 12.90s 2025-06-19 10:32:03.228390 | orchestrator | loadbalancer : Start backup haproxy container -------------------------- 10.14s 2025-06-19 10:32:03.228395 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.66s 2025-06-19 10:32:03.228400 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.71s 2025-06-19 10:32:03.228404 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 6.20s 2025-06-19 10:32:03.228409 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.11s 2025-06-19 10:32:03.228414 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 5.08s 2025-06-19 10:32:03.228418 | orchestrator | loadbalancer : Copying over keepalived.conf ----------------------------- 4.84s 2025-06-19 10:32:03.228423 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 4.56s 2025-06-19 10:32:03.228428 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 4.56s 2025-06-19 10:32:03.228432 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.29s 2025-06-19 10:32:03.228437 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.23s 2025-06-19 
10:32:03.228442 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.17s 2025-06-19 10:32:03.228446 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.11s 2025-06-19 10:32:03.228451 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.09s 2025-06-19 10:32:03.228456 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 3.92s 2025-06-19 10:32:03.228461 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 3.89s 2025-06-19 10:32:03.228465 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 3.88s 2025-06-19 10:32:03.228470 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.79s 2025-06-19 10:32:03.228474 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 3.75s 2025-06-19 10:32:03.228479 | orchestrator | 2025-06-19 10:32:03 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED 2025-06-19 10:32:03.228484 | orchestrator | 2025-06-19 10:32:03 | INFO  | Task 2a4dbdac-2535-4c11-9cf7-f0ad4b37d9e3 is in state STARTED 2025-06-19 10:32:03.228489 | orchestrator | 2025-06-19 10:32:03 | INFO  | Task 1603ab84-5989-4715-9023-135c2350bb80 is in state STARTED 2025-06-19 10:32:03.228494 | orchestrator | 2025-06-19 10:32:03 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:32:06.262383 | orchestrator | 2025-06-19 10:32:06 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED 2025-06-19 10:32:06.262550 | orchestrator | 2025-06-19 10:32:06 | INFO  | Task 2a4dbdac-2535-4c11-9cf7-f0ad4b37d9e3 is in state STARTED 2025-06-19 10:32:06.263386 | orchestrator | 2025-06-19 10:32:06 | INFO  | Task 1603ab84-5989-4715-9023-135c2350bb80 is in state STARTED 2025-06-19 10:32:06.263592 | orchestrator | 2025-06-19 10:32:06 | INFO  | Wait 1 
second(s) until the next check
2025-06-19 10:33:28.611337 | orchestrator | 2025-06-19 10:33:28 | INFO  | Task 1603ab84-5989-4715-9023-135c2350bb80 is in state
STARTED
2025-06-19 10:33:28.612182 | orchestrator | 2025-06-19 10:33:28 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:33:31.661785 | orchestrator | 2025-06-19 10:33:31 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state STARTED
2025-06-19 10:33:31.662595 | orchestrator | 2025-06-19 10:33:31 | INFO  | Task 2a4dbdac-2535-4c11-9cf7-f0ad4b37d9e3 is in state STARTED
2025-06-19 10:33:31.664543 | orchestrator | 2025-06-19 10:33:31 | INFO  | Task 1603ab84-5989-4715-9023-135c2350bb80 is in state STARTED
2025-06-19 10:33:31.664569 | orchestrator | 2025-06-19 10:33:31 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:33:34.708083 | orchestrator | 2025-06-19 10:33:34 | INFO  | Task 46f364df-f39a-4554-819f-848f204d4006 is in state SUCCESS
2025-06-19 10:33:34.710295 | orchestrator |
2025-06-19 10:33:34.710588 | orchestrator |
2025-06-19 10:33:34.710613 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2025-06-19 10:33:34.710625 | orchestrator |
2025-06-19 10:33:34.710637 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-06-19 10:33:34.710648 | orchestrator | Thursday 19 June 2025 10:22:21 +0000 (0:00:00.785) 0:00:00.785 *********
2025-06-19 10:33:34.713981 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-19 10:33:34.714013 | orchestrator |
2025-06-19 10:33:34.714088 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-06-19 10:33:34.714099 | orchestrator | Thursday 19 June 2025 10:22:22 +0000 (0:00:01.661) 0:00:01.716 *********
2025-06-19 10:33:34.714110 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:33:34.714122 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:33:34.714133 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:33:34.714144 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:33:34.714154 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:33:34.714165 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:33:34.714176 | orchestrator |
2025-06-19 10:33:34.714186 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-06-19 10:33:34.714197 | orchestrator | Thursday 19 June 2025 10:22:24 +0000 (0:00:01.661) 0:00:03.378 *********
2025-06-19 10:33:34.714208 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:33:34.714219 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:33:34.714229 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:33:34.714240 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:33:34.714250 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:33:34.714260 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:33:34.714271 | orchestrator |
2025-06-19 10:33:34.714281 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-06-19 10:33:34.714292 | orchestrator | Thursday 19 June 2025 10:22:24 +0000 (0:00:00.842) 0:00:04.220 *********
2025-06-19 10:33:34.714303 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:33:34.714314 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:33:34.714325 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:33:34.714335 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:33:34.714346 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:33:34.714356 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:33:34.714366 | orchestrator |
2025-06-19 10:33:34.714377 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-06-19 10:33:34.714388 | orchestrator | Thursday 19 June 2025 10:22:26 +0000 (0:00:01.054) 0:00:05.275 *********
2025-06-19 10:33:34.714398 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:33:34.714409 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:33:34.714419 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:33:34.714430 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:33:34.714467 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:33:34.714479 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:33:34.714489 | orchestrator |
2025-06-19 10:33:34.714500 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-06-19 10:33:34.714510 | orchestrator | Thursday 19 June 2025 10:22:26 +0000 (0:00:00.677) 0:00:05.953 *********
2025-06-19 10:33:34.714521 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:33:34.714531 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:33:34.714542 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:33:34.714552 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:33:34.714563 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:33:34.714573 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:33:34.714584 | orchestrator |
2025-06-19 10:33:34.714597 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-06-19 10:33:34.714609 | orchestrator | Thursday 19 June 2025 10:22:27 +0000 (0:00:00.622) 0:00:06.576 *********
2025-06-19 10:33:34.714621 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:33:34.714633 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:33:34.714645 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:33:34.714656 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:33:34.714668 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:33:34.714680 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:33:34.714692 | orchestrator |
2025-06-19 10:33:34.714704 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-06-19 10:33:34.714730 | orchestrator | Thursday 19 June 2025 10:22:28 +0000 (0:00:00.846) 0:00:07.422 *********
2025-06-19 10:33:34.714743 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:33:34.714755 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:33:34.714767 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:33:34.714779 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.714791 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.714803 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.714815 | orchestrator |
2025-06-19 10:33:34.714827 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-06-19 10:33:34.714840 | orchestrator | Thursday 19 June 2025 10:22:28 +0000 (0:00:00.682) 0:00:08.105 *********
2025-06-19 10:33:34.714852 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:33:34.714864 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:33:34.714875 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:33:34.714887 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:33:34.714899 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:33:34.714911 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:33:34.714923 | orchestrator |
2025-06-19 10:33:34.714935 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-06-19 10:33:34.714946 | orchestrator | Thursday 19 June 2025 10:22:29 +0000 (0:00:01.159) 0:00:09.264 *********
2025-06-19 10:33:34.714957 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-06-19 10:33:34.714968 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-19 10:33:34.714978 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-19 10:33:34.714989 | orchestrator |
2025-06-19 10:33:34.715000 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-06-19 10:33:34.715018 | orchestrator | Thursday 19 June 2025 10:22:30 +0000 (0:00:00.632) 0:00:09.896 *********
2025-06-19 10:33:34.715029 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:33:34.715040 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:33:34.715050 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:33:34.715061 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:33:34.715071 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:33:34.715081 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:33:34.715092 | orchestrator |
2025-06-19 10:33:34.715124 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-06-19 10:33:34.715136 | orchestrator | Thursday 19 June 2025 10:22:31 +0000 (0:00:00.959) 0:00:10.856 *********
2025-06-19 10:33:34.715146 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-06-19 10:33:34.715157 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-19 10:33:34.715168 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-19 10:33:34.715178 | orchestrator |
2025-06-19 10:33:34.715189 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-06-19 10:33:34.715199 | orchestrator | Thursday 19 June 2025 10:22:34 +0000 (0:00:03.024) 0:00:13.880 *********
2025-06-19 10:33:34.715210 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-06-19 10:33:34.715221 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-06-19 10:33:34.715232 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-06-19 10:33:34.715242 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:33:34.715253 | orchestrator |
2025-06-19 10:33:34.715263 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-06-19 10:33:34.715274 | orchestrator | Thursday 19 June 2025 10:22:35 +0000 (0:00:00.450) 0:00:14.331 *********
2025-06-19 10:33:34.715286 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-06-19 10:33:34.715313 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-06-19 10:33:34.715324 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-06-19 10:33:34.715335 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:33:34.715346 | orchestrator |
2025-06-19 10:33:34.715356 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-06-19 10:33:34.715367 | orchestrator | Thursday 19 June 2025 10:22:35 +0000 (0:00:00.735) 0:00:15.066 *********
2025-06-19 10:33:34.715380 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-06-19 10:33:34.715393 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-06-19 10:33:34.715404 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-06-19 10:33:34.715415 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:33:34.715425 | orchestrator |
2025-06-19 10:33:34.715477 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-06-19 10:33:34.715491 | orchestrator | Thursday 19 June 2025 10:22:36 +0000 (0:00:00.257) 0:00:15.324 *********
2025-06-19 10:33:34.715520 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-06-19 10:22:32.301122', 'end': '2025-06-19 10:22:32.583977', 'delta': '0:00:00.282855', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-06-19 10:33:34.715534 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-06-19 10:22:33.287263', 'end': '2025-06-19 10:22:33.560960', 'delta': '0:00:00.273697', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-06-19 10:33:34.715546 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-06-19 10:22:34.066887', 'end': '2025-06-19 10:22:34.377278', 'delta': '0:00:00.310391', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-06-19 10:33:34.715565 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:33:34.715576 | orchestrator |
2025-06-19 10:33:34.715586 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-06-19 10:33:34.715597 | orchestrator | Thursday 19 June 2025 10:22:36 +0000 (0:00:00.575) 0:00:15.899 *********
2025-06-19 10:33:34.715608 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:33:34.715618 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:33:34.715629 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:33:34.715640 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:33:34.715650 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:33:34.715661 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:33:34.715671 | orchestrator |
2025-06-19 10:33:34.715682 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-06-19 10:33:34.715692 | orchestrator | Thursday 19 June 2025 10:22:38 +0000 (0:00:01.722) 0:00:17.621 *********
2025-06-19 10:33:34.715703 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-06-19 10:33:34.715714 | orchestrator |
2025-06-19 10:33:34.715724 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-06-19 10:33:34.715735 | orchestrator | Thursday 19 June 2025 10:22:39 +0000 (0:00:00.920) 0:00:18.542 *********
2025-06-19 10:33:34.715746 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:33:34.715756 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:33:34.715767 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:33:34.715778 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.715788 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.715799 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.715810 | orchestrator |
2025-06-19 10:33:34.715821 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-06-19 10:33:34.715837 | orchestrator | Thursday 19 June 2025 10:22:40 +0000 (0:00:01.097) 0:00:19.639 *********
2025-06-19 10:33:34.715854 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:33:34.715865 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:33:34.715875 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:33:34.715886 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.715896 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.715907 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.715918 | orchestrator |
2025-06-19 10:33:34.715928 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-06-19 10:33:34.715939 | orchestrator | Thursday 19 June 2025 10:22:41 +0000 (0:00:01.107) 0:00:20.747 *********
2025-06-19 10:33:34.715976 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:33:34.715987 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:33:34.715998 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:33:34.716008 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.716019 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.716030 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.716040 | orchestrator |
2025-06-19 10:33:34.716051 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-06-19 10:33:34.716062 | orchestrator | Thursday 19 June 2025 10:22:42 +0000 (0:00:00.928) 0:00:21.675 *********
2025-06-19 10:33:34.716073 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:33:34.716083 | orchestrator |
2025-06-19 10:33:34.716094 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-06-19 10:33:34.716112 | orchestrator | Thursday 19 June 2025 10:22:42 +0000 (0:00:00.168) 0:00:21.844 *********
2025-06-19 10:33:34.716123 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:33:34.716133 | orchestrator |
2025-06-19 10:33:34.716144 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-06-19 10:33:34.716154 | orchestrator | Thursday 19 June 2025 10:22:42 +0000 (0:00:00.254) 0:00:22.099 *********
2025-06-19 10:33:34.716170 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:33:34.716181 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:33:34.716191 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:33:34.716202 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.716213 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.716223 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.716234 | orchestrator |
2025-06-19 10:33:34.716263 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-06-19 10:33:34.716275 | orchestrator | Thursday 19 June 2025 10:22:43 +0000 (0:00:00.834) 0:00:22.933 *********
2025-06-19 10:33:34.716286 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:33:34.716297 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:33:34.716307 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:33:34.716317 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.716328 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.716339 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.716350 | orchestrator |
2025-06-19 10:33:34.716361 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-06-19 10:33:34.716373 | orchestrator | Thursday 19 June 2025 10:22:44 +0000 (0:00:01.179) 0:00:24.113 *********
2025-06-19 10:33:34.716383 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:33:34.716394 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:33:34.716405 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:33:34.716415 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.716426 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.716509 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.716529 | orchestrator |
2025-06-19 10:33:34.716540 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-06-19 10:33:34.716552 | orchestrator | Thursday 19 June 2025 10:22:46 +0000 (0:00:01.196) 0:00:25.309 *********
2025-06-19 10:33:34.716563 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:33:34.716573 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:33:34.716584 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:33:34.716594 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.716605 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.716615 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.716626 | orchestrator |
2025-06-19 10:33:34.716636 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-06-19 10:33:34.716647 | orchestrator | Thursday 19 June 2025 10:22:47 +0000 (0:00:01.107) 0:00:26.416 ********* 2025-06-19 10:33:34.716657 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.716668 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.716679 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.716689 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.716699 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.716710 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.716751 | orchestrator | 2025-06-19 10:33:34.716762 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-06-19 10:33:34.716773 | orchestrator | Thursday 19 June 2025 10:22:47 +0000 (0:00:00.605) 0:00:27.021 ********* 2025-06-19 10:33:34.716783 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.716794 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.716804 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.716815 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.716825 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.716845 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.716855 | orchestrator | 2025-06-19 10:33:34.716866 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-06-19 10:33:34.716877 | orchestrator | Thursday 19 June 2025 10:22:48 +0000 (0:00:00.764) 0:00:27.785 ********* 2025-06-19 10:33:34.716888 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.716899 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.716909 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.716919 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.716930 | orchestrator | 
skipping: [testbed-node-1] 2025-06-19 10:33:34.716940 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.716951 | orchestrator | 2025-06-19 10:33:34.716961 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-06-19 10:33:34.716972 | orchestrator | Thursday 19 June 2025 10:22:49 +0000 (0:00:00.788) 0:00:28.574 ********* 2025-06-19 10:33:34.716984 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3f69fe47--683a--554f--92f7--031e2a26df27-osd--block--3f69fe47--683a--554f--92f7--031e2a26df27', 'dm-uuid-LVM-3FbwjtgmDfoYI2HFMVZ7etdFItcyZ0uA120tcIDhw0ksPX9thSpC6lMqPATNVSsQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.716997 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--04cfa187--5820--5d05--93de--747bac6f19c1-osd--block--04cfa187--5820--5d05--93de--747bac6f19c1', 'dm-uuid-LVM-MJpIKKReme2cd0ENNcgCir2ui8Foeckc0XwTlPW1Xwlo0ltlGpmVFKsVBWqITycn'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.717030 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.717042 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.717052 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.717062 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.717077 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.717087 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.717097 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.717107 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.717146 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_236643a8-3fbf-4a38-ac5c-7d15a0179c3a', 'scsi-SQEMU_QEMU_HARDDISK_236643a8-3fbf-4a38-ac5c-7d15a0179c3a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_236643a8-3fbf-4a38-ac5c-7d15a0179c3a-part1', 'scsi-SQEMU_QEMU_HARDDISK_236643a8-3fbf-4a38-ac5c-7d15a0179c3a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_236643a8-3fbf-4a38-ac5c-7d15a0179c3a-part14', 'scsi-SQEMU_QEMU_HARDDISK_236643a8-3fbf-4a38-ac5c-7d15a0179c3a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_236643a8-3fbf-4a38-ac5c-7d15a0179c3a-part15', 'scsi-SQEMU_QEMU_HARDDISK_236643a8-3fbf-4a38-ac5c-7d15a0179c3a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_236643a8-3fbf-4a38-ac5c-7d15a0179c3a-part16', 'scsi-SQEMU_QEMU_HARDDISK_236643a8-3fbf-4a38-ac5c-7d15a0179c3a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-19 10:33:34.717168 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--3f69fe47--683a--554f--92f7--031e2a26df27-osd--block--3f69fe47--683a--554f--92f7--031e2a26df27'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-97R3N7-2A34-s4Zc-sU9t-FfDM-jVwa-FScsR4', 'scsi-0QEMU_QEMU_HARDDISK_5fba7027-7a45-483b-8644-e0c0ef304581', 'scsi-SQEMU_QEMU_HARDDISK_5fba7027-7a45-483b-8644-e0c0ef304581'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-19 10:33:34.717227 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6ed986be--d550--5e98--86ee--1d899c3b1ca9-osd--block--6ed986be--d550--5e98--86ee--1d899c3b1ca9', 'dm-uuid-LVM-6UuOoFyYjJqfS0KyCKdOA2ZjxgfYaumB0JzZSfzm5xIlni9r9FK3ddaezn5i3pKJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.717303 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--79abc216--b4ba--5883--a19f--da26bd64d731-osd--block--79abc216--b4ba--5883--a19f--da26bd64d731', 'dm-uuid-LVM-5u9SY1ubYuxxI9hO0nIhcA10D1Nk4CPofYdDZ1RLD6MDXShcpLtHCCGGR0BEV1H9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': 
'512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.717318 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.717343 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--04cfa187--5820--5d05--93de--747bac6f19c1-osd--block--04cfa187--5820--5d05--93de--747bac6f19c1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-03tnYq-6ggS-I1wM-HNLR-s7cp-1W3b-GMFoGB', 'scsi-0QEMU_QEMU_HARDDISK_5cdb3fff-d4f1-405f-abd7-b446ee32738c', 'scsi-SQEMU_QEMU_HARDDISK_5cdb3fff-d4f1-405f-abd7-b446ee32738c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-19 10:33:34.717354 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.717364 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 
'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.717382 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6c4f0114-96df-472d-8cd2-75acad9ce658', 'scsi-SQEMU_QEMU_HARDDISK_6c4f0114-96df-472d-8cd2-75acad9ce658'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-19 10:33:34.717393 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.717403 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.717414 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-19-09-43-36-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-19 10:33:34.717424 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.717459 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.717479 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.717489 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--3c3fffd7--e076--56d5--815a--37625d7b3693-osd--block--3c3fffd7--e076--56d5--815a--37625d7b3693', 'dm-uuid-LVM-iMc2qlTJtEt456uyjM8G66T1ryw1zEFJCntoOPXDuhR1TXYKYg92dRvX78kvQfdl'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.717508 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32c85e8d-b71e-43db-9ec2-d353b455abf6', 'scsi-SQEMU_QEMU_HARDDISK_32c85e8d-b71e-43db-9ec2-d353b455abf6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32c85e8d-b71e-43db-9ec2-d353b455abf6-part1', 'scsi-SQEMU_QEMU_HARDDISK_32c85e8d-b71e-43db-9ec2-d353b455abf6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32c85e8d-b71e-43db-9ec2-d353b455abf6-part14', 'scsi-SQEMU_QEMU_HARDDISK_32c85e8d-b71e-43db-9ec2-d353b455abf6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32c85e8d-b71e-43db-9ec2-d353b455abf6-part15', 'scsi-SQEMU_QEMU_HARDDISK_32c85e8d-b71e-43db-9ec2-d353b455abf6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': 
{'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32c85e8d-b71e-43db-9ec2-d353b455abf6-part16', 'scsi-SQEMU_QEMU_HARDDISK_32c85e8d-b71e-43db-9ec2-d353b455abf6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-19 10:33:34.717520 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--eebf63d4--54bc--5b4a--b141--3683d252bf06-osd--block--eebf63d4--54bc--5b4a--b141--3683d252bf06', 'dm-uuid-LVM-Hf8bPfljEZOuC036yY59Zp1iEAlpSTQeymxFXnO5uCz0J3Xzp4Fl7CDhANxyEEzq'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.717534 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.717550 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--6ed986be--d550--5e98--86ee--1d899c3b1ca9-osd--block--6ed986be--d550--5e98--86ee--1d899c3b1ca9'], 'host': 'SCSI storage controller: Red Hat, 
Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5cZ3bI-yphK-DX0j-eq18-5KjP-2qsX-orgfUW', 'scsi-0QEMU_QEMU_HARDDISK_6a40ab2f-d460-475a-85e2-5470cb1f2b74', 'scsi-SQEMU_QEMU_HARDDISK_6a40ab2f-d460-475a-85e2-5470cb1f2b74'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-19 10:33:34.717572 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.717583 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--79abc216--b4ba--5883--a19f--da26bd64d731-osd--block--79abc216--b4ba--5883--a19f--da26bd64d731'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-G13f9C-LH5R-OpPS-YXca-r93x-U6vC-ldTAYB', 'scsi-0QEMU_QEMU_HARDDISK_38f445f8-bcf4-4b54-8d34-faf3abd36175', 'scsi-SQEMU_QEMU_HARDDISK_38f445f8-bcf4-4b54-8d34-faf3abd36175'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-19 10:33:34.717593 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.717603 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.717613 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f17817e-651e-4f9a-8129-c3db8254ad0b', 'scsi-SQEMU_QEMU_HARDDISK_2f17817e-651e-4f9a-8129-c3db8254ad0b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-19 10:33:34.717623 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.717637 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-19-09-43-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-19 10:33:34.717652 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2025-06-19 10:33:34.717663 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.717681 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.717699 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.717716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.717764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.717775 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.717800 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d3db73c4-91fc-4185-92a8-f3f49747b38e', 'scsi-SQEMU_QEMU_HARDDISK_d3db73c4-91fc-4185-92a8-f3f49747b38e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d3db73c4-91fc-4185-92a8-f3f49747b38e-part1', 'scsi-SQEMU_QEMU_HARDDISK_d3db73c4-91fc-4185-92a8-f3f49747b38e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d3db73c4-91fc-4185-92a8-f3f49747b38e-part14', 'scsi-SQEMU_QEMU_HARDDISK_d3db73c4-91fc-4185-92a8-f3f49747b38e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d3db73c4-91fc-4185-92a8-f3f49747b38e-part15', 'scsi-SQEMU_QEMU_HARDDISK_d3db73c4-91fc-4185-92a8-f3f49747b38e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d3db73c4-91fc-4185-92a8-f3f49747b38e-part16', 
'scsi-SQEMU_QEMU_HARDDISK_d3db73c4-91fc-4185-92a8-f3f49747b38e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-19 10:33:34.717818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.717828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.717838 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--3c3fffd7--e076--56d5--815a--37625d7b3693-osd--block--3c3fffd7--e076--56d5--815a--37625d7b3693'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-flVRpd-9uE6-rKaD-ogui-ysZt-MUo1-sX3ca6', 'scsi-0QEMU_QEMU_HARDDISK_1ab95973-8f65-40ad-b4e2-5ebf4e7cdc3f', 'scsi-SQEMU_QEMU_HARDDISK_1ab95973-8f65-40ad-b4e2-5ebf4e7cdc3f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-19 10:33:34.717848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.717859 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--eebf63d4--54bc--5b4a--b141--3683d252bf06-osd--block--eebf63d4--54bc--5b4a--b141--3683d252bf06'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-MDMvGv-SNh8-xR9R-r3OZ-Mfce-jcBT-mY01Ah', 'scsi-0QEMU_QEMU_HARDDISK_d7da1435-c5c9-4327-bd6f-1fcfb647c27d', 'scsi-SQEMU_QEMU_HARDDISK_d7da1435-c5c9-4327-bd6f-1fcfb647c27d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-19 10:33:34.717869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.717883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.717900 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d47195-a07b-47d0-b7e6-8f07488663d6', 'scsi-SQEMU_QEMU_HARDDISK_48d47195-a07b-47d0-b7e6-8f07488663d6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-19 10:33:34.717917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.717927 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-19-09-43-29-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-19 10:33:34.717938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b01ca272-0367-4531-95c0-7a23711a0302', 'scsi-SQEMU_QEMU_HARDDISK_b01ca272-0367-4531-95c0-7a23711a0302'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b01ca272-0367-4531-95c0-7a23711a0302-part1', 'scsi-SQEMU_QEMU_HARDDISK_b01ca272-0367-4531-95c0-7a23711a0302-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b01ca272-0367-4531-95c0-7a23711a0302-part14', 'scsi-SQEMU_QEMU_HARDDISK_b01ca272-0367-4531-95c0-7a23711a0302-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b01ca272-0367-4531-95c0-7a23711a0302-part15', 'scsi-SQEMU_QEMU_HARDDISK_b01ca272-0367-4531-95c0-7a23711a0302-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b01ca272-0367-4531-95c0-7a23711a0302-part16', 'scsi-SQEMU_QEMU_HARDDISK_b01ca272-0367-4531-95c0-7a23711a0302-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-19 10:33:34.717959 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-19-09-43-38-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-19 10:33:34.717975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.717986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.717996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.718006 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.718055 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.718068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.718078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.718088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.718098 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.718108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.718132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b95ea5b-b272-4ba4-9e64-b7a520d8cc22', 'scsi-SQEMU_QEMU_HARDDISK_8b95ea5b-b272-4ba4-9e64-b7a520d8cc22'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b95ea5b-b272-4ba4-9e64-b7a520d8cc22-part1', 'scsi-SQEMU_QEMU_HARDDISK_8b95ea5b-b272-4ba4-9e64-b7a520d8cc22-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b95ea5b-b272-4ba4-9e64-b7a520d8cc22-part14', 'scsi-SQEMU_QEMU_HARDDISK_8b95ea5b-b272-4ba4-9e64-b7a520d8cc22-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b95ea5b-b272-4ba4-9e64-b7a520d8cc22-part15', 'scsi-SQEMU_QEMU_HARDDISK_8b95ea5b-b272-4ba4-9e64-b7a520d8cc22-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b95ea5b-b272-4ba4-9e64-b7a520d8cc22-part16', 
'scsi-SQEMU_QEMU_HARDDISK_8b95ea5b-b272-4ba4-9e64-b7a520d8cc22-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-19 10:33:34.718150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-19-09-43-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-19 10:33:34.718160 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.718170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.718180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.718190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.718210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.718226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.718236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.718246 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.718255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:33:34.718266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45d6f613-e1c9-4f07-aade-2d2f7c147254', 'scsi-SQEMU_QEMU_HARDDISK_45d6f613-e1c9-4f07-aade-2d2f7c147254'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45d6f613-e1c9-4f07-aade-2d2f7c147254-part1', 'scsi-SQEMU_QEMU_HARDDISK_45d6f613-e1c9-4f07-aade-2d2f7c147254-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45d6f613-e1c9-4f07-aade-2d2f7c147254-part14', 'scsi-SQEMU_QEMU_HARDDISK_45d6f613-e1c9-4f07-aade-2d2f7c147254-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45d6f613-e1c9-4f07-aade-2d2f7c147254-part15', 'scsi-SQEMU_QEMU_HARDDISK_45d6f613-e1c9-4f07-aade-2d2f7c147254-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45d6f613-e1c9-4f07-aade-2d2f7c147254-part16', 'scsi-SQEMU_QEMU_HARDDISK_45d6f613-e1c9-4f07-aade-2d2f7c147254-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-19 10:33:34.718291 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-19-09-43-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-19 10:33:34.718302 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.718312 | orchestrator | 2025-06-19 10:33:34.718322 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-06-19 10:33:34.718331 | orchestrator | Thursday 19 June 2025 10:22:50 +0000 (0:00:01.080) 0:00:29.654 ********* 2025-06-19 10:33:34.718342 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3f69fe47--683a--554f--92f7--031e2a26df27-osd--block--3f69fe47--683a--554f--92f7--031e2a26df27', 'dm-uuid-LVM-3FbwjtgmDfoYI2HFMVZ7etdFItcyZ0uA120tcIDhw0ksPX9thSpC6lMqPATNVSsQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:33:34.718353 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--04cfa187--5820--5d05--93de--747bac6f19c1-osd--block--04cfa187--5820--5d05--93de--747bac6f19c1', 'dm-uuid-LVM-MJpIKKReme2cd0ENNcgCir2ui8Foeckc0XwTlPW1Xwlo0ltlGpmVFKsVBWqITycn'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:33:34.718364 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:33:34.718374 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:33:34.718393 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:33:34.718422 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:33:34.718433 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:33:34.718511 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:33:34.718522 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6ed986be--d550--5e98--86ee--1d899c3b1ca9-osd--block--6ed986be--d550--5e98--86ee--1d899c3b1ca9', 'dm-uuid-LVM-6UuOoFyYjJqfS0KyCKdOA2ZjxgfYaumB0JzZSfzm5xIlni9r9FK3ddaezn5i3pKJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:33:34.718532 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:33:34.718551 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--79abc216--b4ba--5883--a19f--da26bd64d731-osd--block--79abc216--b4ba--5883--a19f--da26bd64d731', 'dm-uuid-LVM-5u9SY1ubYuxxI9hO0nIhcA10D1Nk4CPofYdDZ1RLD6MDXShcpLtHCCGGR0BEV1H9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:33:34.718572 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:33:34.718583 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  [identical per-device skip records condensed below; each skipped item carried the full ansible_devices dict for that device]
2025-06-19 10:33:34.718594 | orchestrator | skipping: [testbed-node-3] => each remaining device item (sda: 80.00 GB QEMU HARDDISK boot disk, partitions sda1 'cloudimg-rootfs', sda14, sda15 'UEFI', sda16 'BOOT'; sdb, sdc: 20.00 GB QEMU HARDDISK ceph OSD LVM PVs; sdd: 20.00 GB QEMU HARDDISK, unused; sr0: QEMU DVD-ROM, label 'config-2'), false_condition: 'osd_auto_discovery | default(False) | bool'
2025-06-19 10:33:34.718612 | orchestrator | skipping: [testbed-node-4] => each remaining device item (loop1..loop7: empty loop devices; sda: 80.00 GB boot disk; sdb, sdc: 20.00 GB ceph OSD LVM PVs; sdd: 20.00 GB, unused; sr0: QEMU DVD-ROM, label 'config-2'), false_condition: 'osd_auto_discovery | default(False) | bool'
2025-06-19 10:33:34.718756 | orchestrator | skipping: [testbed-node-5] => each remaining device item (dm-0, dm-1: ceph OSD block LVs; loop0..loop7: empty loop devices; sda: 80.00 GB boot disk; sdb, sdc: 20.00 GB ceph OSD LVM PVs; sdd: 20.00 GB, unused; sr0: QEMU DVD-ROM, label 'config-2'), false_condition: 'osd_auto_discovery | default(False) | bool'
2025-06-19 10:33:34.719048 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:33:34.719109 | orchestrator | skipping: [testbed-node-0] => each device item (loop0..loop7: empty loop devices; sda: 80.00 GB boot disk; sr0: QEMU DVD-ROM, label 'config-2'), false_condition: 'inventory_hostname in groups.get(osd_group_name, [])'
2025-06-19 10:33:34.719274 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:33:34.719284 | orchestrator | skipping: [testbed-node-1] => (item loop0: empty loop device), false_condition: 'inventory_hostname in groups.get(osd_group_name, [])'
2025-06-19 10:33:34.719294 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links':
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:33:34.719338 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:33:34.719359 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:33:34.719369 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 
'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:33:34.719380 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b95ea5b-b272-4ba4-9e64-b7a520d8cc22', 'scsi-SQEMU_QEMU_HARDDISK_8b95ea5b-b272-4ba4-9e64-b7a520d8cc22'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b95ea5b-b272-4ba4-9e64-b7a520d8cc22-part1', 'scsi-SQEMU_QEMU_HARDDISK_8b95ea5b-b272-4ba4-9e64-b7a520d8cc22-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b95ea5b-b272-4ba4-9e64-b7a520d8cc22-part14', 'scsi-SQEMU_QEMU_HARDDISK_8b95ea5b-b272-4ba4-9e64-b7a520d8cc22-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b95ea5b-b272-4ba4-9e64-b7a520d8cc22-part15', 'scsi-SQEMU_QEMU_HARDDISK_8b95ea5b-b272-4ba4-9e64-b7a520d8cc22-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_8b95ea5b-b272-4ba4-9e64-b7a520d8cc22-part16', 'scsi-SQEMU_QEMU_HARDDISK_8b95ea5b-b272-4ba4-9e64-b7a520d8cc22-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:33:34.719402 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-19-09-43-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:33:34.719413 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.719423 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.719432 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.719513 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 
'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:33:34.719524 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:33:34.719541 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:33:34.719551 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:33:34.719561 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:33:34.719571 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:33:34.719592 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:33:34.719603 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:33:34.719613 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45d6f613-e1c9-4f07-aade-2d2f7c147254', 'scsi-SQEMU_QEMU_HARDDISK_45d6f613-e1c9-4f07-aade-2d2f7c147254'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45d6f613-e1c9-4f07-aade-2d2f7c147254-part1', 'scsi-SQEMU_QEMU_HARDDISK_45d6f613-e1c9-4f07-aade-2d2f7c147254-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45d6f613-e1c9-4f07-aade-2d2f7c147254-part14', 'scsi-SQEMU_QEMU_HARDDISK_45d6f613-e1c9-4f07-aade-2d2f7c147254-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45d6f613-e1c9-4f07-aade-2d2f7c147254-part15', 'scsi-SQEMU_QEMU_HARDDISK_45d6f613-e1c9-4f07-aade-2d2f7c147254-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_45d6f613-e1c9-4f07-aade-2d2f7c147254-part16', 'scsi-SQEMU_QEMU_HARDDISK_45d6f613-e1c9-4f07-aade-2d2f7c147254-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-06-19 10:33:34.719635 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-19-09-43-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-19 10:33:34.719646 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.719656 | orchestrator |
2025-06-19 10:33:34.719665 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-06-19 10:33:34.719675 | orchestrator | Thursday 19 June 2025 10:22:52 +0000 (0:00:01.716) 0:00:31.371 *********
2025-06-19 10:33:34.719691 | orchestrator | 2025-06-19 10:33:34 | INFO  | Task 2a4dbdac-2535-4c11-9cf7-f0ad4b37d9e3 is in state STARTED
2025-06-19 10:33:34.719701 | orchestrator | 2025-06-19 10:33:34 | INFO  | Task 1603ab84-5989-4715-9023-135c2350bb80 is in state STARTED
2025-06-19 10:33:34.719711 | orchestrator | 2025-06-19 10:33:34 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:33:34.719721 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:33:34.719731 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:33:34.719740 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:33:34.719750 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:33:34.719759 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:33:34.719775 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:33:34.719784 | orchestrator |
2025-06-19 10:33:34.719794 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-06-19 10:33:34.719804 | orchestrator | Thursday 19 June 2025 10:22:54 +0000 (0:00:01.980) 0:00:33.355 *********
2025-06-19 10:33:34.719813 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:33:34.719823 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:33:34.719832 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:33:34.719841 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:33:34.719851 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:33:34.719860 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:33:34.719869 | orchestrator |
2025-06-19 10:33:34.719879 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-06-19 10:33:34.719888 | orchestrator | Thursday 19 June 2025 10:22:54 +0000 (0:00:00.729) 0:00:34.085 *********
2025-06-19 10:33:34.719898 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:33:34.719908 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:33:34.719917 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:33:34.719927 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.719936 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.719945 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.719955 | orchestrator |
2025-06-19 10:33:34.719963 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-06-19 10:33:34.719971 | orchestrator | Thursday 19 June 2025 10:22:55 +0000 (0:00:01.095) 0:00:35.180 *********
2025-06-19 10:33:34.719979 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:33:34.719987 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:33:34.719994 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:33:34.720002 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.720010 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.720017 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.720025 | orchestrator |
2025-06-19 10:33:34.720033 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-06-19 10:33:34.720041 | orchestrator | Thursday 19 June 2025 10:22:56 +0000 (0:00:00.896) 0:00:36.077 *********
2025-06-19 10:33:34.720049 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:33:34.720057 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:33:34.720064 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:33:34.720072 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.720080 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.720087 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.720095 | orchestrator |
2025-06-19 10:33:34.720103 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-06-19 10:33:34.720111 | orchestrator | Thursday 19 June 2025 10:22:57 +0000 (0:00:00.716) 0:00:36.794 *********
2025-06-19 10:33:34.720118 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:33:34.720126 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:33:34.720134 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:33:34.720141 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.720149 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.720157 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.720165 | orchestrator |
2025-06-19 10:33:34.720172 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-06-19 10:33:34.720180 | orchestrator | Thursday 19 June 2025 10:22:58 +0000 (0:00:01.176) 0:00:37.970 *********
2025-06-19 10:33:34.720188 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-06-19 10:33:34.720196 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-06-19 10:33:34.720204 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-06-19 10:33:34.720212 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-06-19 10:33:34.720220 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-06-19 10:33:34.720228 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-19 10:33:34.720235 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-06-19 10:33:34.720248 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-06-19 10:33:34.720256 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-06-19 10:33:34.720264 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-06-19 10:33:34.720272 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2025-06-19 10:33:34.720280 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2025-06-19 10:33:34.720287 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-06-19 10:33:34.720295 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-06-19 10:33:34.720303 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2025-06-19 10:33:34.720311 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2025-06-19 10:33:34.720318 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2025-06-19 10:33:34.720326 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2025-06-19 10:33:34.720334 | orchestrator |
2025-06-19 10:33:34.720342 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-06-19 10:33:34.720350 | orchestrator | Thursday 19 June 2025 10:23:01 +0000 (0:00:03.072) 0:00:41.043 *********
2025-06-19 10:33:34.720362 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-06-19 10:33:34.720370 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-06-19 10:33:34.720378 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-06-19 10:33:34.720386 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:33:34.720393 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-06-19 10:33:34.720406 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-06-19 10:33:34.720414 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-06-19 10:33:34.720421 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:33:34.720429 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-06-19 10:33:34.720456 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-06-19 10:33:34.720470 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-06-19 10:33:34.720482 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:33:34.720490 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-06-19 10:33:34.720498 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-06-19 10:33:34.720505 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-06-19 10:33:34.720513 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.720520 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-06-19 10:33:34.720528 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-06-19 10:33:34.720536 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-06-19 10:33:34.720543 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.720551 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-06-19 10:33:34.720559 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-06-19 10:33:34.720566 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-06-19 10:33:34.720574 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.720582 | orchestrator |
2025-06-19 10:33:34.720590 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-06-19 10:33:34.720598 | orchestrator | Thursday 19 June 2025 10:23:02 +0000 (0:00:00.766) 0:00:41.809 *********
2025-06-19 10:33:34.720605 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.720613 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.720621 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.720629 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-19 10:33:34.720637 | orchestrator |
2025-06-19 10:33:34.720645 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-06-19 10:33:34.720659 | orchestrator | Thursday 19 June 2025 10:23:03 +0000 (0:00:00.820) 0:00:42.629 *********
2025-06-19 10:33:34.720667 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:33:34.720675 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:33:34.720683 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:33:34.720691 | orchestrator |
2025-06-19 10:33:34.720831 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-06-19 10:33:34.720844 | orchestrator | Thursday 19 June 2025 10:23:03 +0000 (0:00:00.421) 0:00:43.050 *********
2025-06-19 10:33:34.720852 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:33:34.720860 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:33:34.720867 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:33:34.720875 | orchestrator |
2025-06-19 10:33:34.720883 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-06-19 10:33:34.720891 | orchestrator | Thursday 19 June 2025 10:23:04 +0000 (0:00:00.394) 0:00:43.445 *********
2025-06-19 10:33:34.720899 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:33:34.720906 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:33:34.720914 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:33:34.720922 | orchestrator |
2025-06-19 10:33:34.720930 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-06-19 10:33:34.720937 | orchestrator | Thursday 19 June 2025 10:23:04 +0000 (0:00:00.720) 0:00:44.165 *********
2025-06-19 10:33:34.720945 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:33:34.720953 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:33:34.720961 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:33:34.720968 | orchestrator |
2025-06-19 10:33:34.720976 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-06-19 10:33:34.720984 | orchestrator | Thursday 19 June 2025 10:23:05 +0000 (0:00:00.329) 0:00:44.495 *********
2025-06-19 10:33:34.720991 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-19 10:33:34.720999 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-19 10:33:34.721007 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-19 10:33:34.721014 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:33:34.721022 | orchestrator |
2025-06-19 10:33:34.721030 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-06-19 10:33:34.721038 | orchestrator | Thursday 19 June 2025 10:23:05 +0000 (0:00:00.320) 0:00:44.815 *********
2025-06-19 10:33:34.721045 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-19 10:33:34.721053 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-19 10:33:34.721061 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-19 10:33:34.721069 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:33:34.721076 | orchestrator |
2025-06-19 10:33:34.721084 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-06-19 10:33:34.721092 | orchestrator | Thursday 19 June 2025 10:23:05 +0000 (0:00:00.345) 0:00:45.160 *********
2025-06-19 10:33:34.721100 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-19 10:33:34.721107 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-19 10:33:34.721115 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-19 10:33:34.721122 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:33:34.721130 | orchestrator |
2025-06-19 10:33:34.721143 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-06-19 10:33:34.721151 | orchestrator | Thursday 19 June 2025 10:23:06 +0000 (0:00:00.355) 0:00:45.516 *********
2025-06-19 10:33:34.721159 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:33:34.721167 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:33:34.721174 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:33:34.721182 | orchestrator |
2025-06-19 10:33:34.721190 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-06-19 10:33:34.721198 | orchestrator | Thursday 19 June 2025 10:23:06 +0000 (0:00:00.380) 0:00:45.896 *********
2025-06-19 10:33:34.721212 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-06-19 10:33:34.721221 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-06-19 10:33:34.721228 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-06-19 10:33:34.721236 | orchestrator |
2025-06-19 10:33:34.721244 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-06-19 10:33:34.721251 | orchestrator | Thursday 19 June 2025 10:23:08 +0000 (0:00:01.596) 0:00:47.492 *********
2025-06-19 10:33:34.721259 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-06-19 10:33:34.721267 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-19 10:33:34.721275 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-19 10:33:34.721282 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-06-19 10:33:34.721290 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-06-19 10:33:34.721299 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-06-19 10:33:34.721306 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-06-19 10:33:34.721314 | orchestrator |
2025-06-19 10:33:34.721322 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-06-19 10:33:34.721330 | orchestrator | Thursday 19 June 2025 10:23:09 +0000 (0:00:00.852) 0:00:48.345 *********
2025-06-19 10:33:34.721337 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-06-19 10:33:34.721345 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-19 10:33:34.721353 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-19 10:33:34.721378 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-06-19 10:33:34.721386 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-06-19 10:33:34.721394 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-06-19 10:33:34.721401 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-06-19 10:33:34.721409 | orchestrator |
2025-06-19 10:33:34.721417 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-06-19 10:33:34.721468 | orchestrator | Thursday 19 June 2025 10:23:11 +0000 (0:00:02.178) 0:00:50.523 *********
2025-06-19 10:33:34.721484 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-19 10:33:34.721499 | orchestrator |
2025-06-19 10:33:34.721511 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-06-19 10:33:34.721520 | orchestrator | Thursday 19 June 2025 10:23:12 +0000 (0:00:01.274) 0:00:51.797 *********
2025-06-19 10:33:34.721529 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-19 10:33:34.721538 | orchestrator |
2025-06-19 10:33:34.721546 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-06-19 10:33:34.721555 | orchestrator | Thursday 19 June 2025 10:23:13 +0000 (0:00:01.075) 0:00:52.873 *********
2025-06-19 10:33:34.721563 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:33:34.721572 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:33:34.721580 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:33:34.721589 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:33:34.721598 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:33:34.721606 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:33:34.721615 | orchestrator |
2025-06-19 10:33:34.721623 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-06-19 10:33:34.721632 | orchestrator | Thursday 19 June 2025 10:23:14 +0000 (0:00:01.230) 0:00:54.103 *********
2025-06-19 10:33:34.721647 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.721656 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.721665 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.721674 | orchestrator | ok:
[testbed-node-3] 2025-06-19 10:33:34.721683 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:33:34.721691 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:33:34.721699 | orchestrator | 2025-06-19 10:33:34.721708 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-19 10:33:34.721717 | orchestrator | Thursday 19 June 2025 10:23:15 +0000 (0:00:01.043) 0:00:55.146 ********* 2025-06-19 10:33:34.721726 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.721734 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:33:34.721743 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:33:34.721752 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:33:34.721760 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.721770 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.721778 | orchestrator | 2025-06-19 10:33:34.721787 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-19 10:33:34.721795 | orchestrator | Thursday 19 June 2025 10:23:16 +0000 (0:00:00.965) 0:00:56.112 ********* 2025-06-19 10:33:34.721803 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.721810 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.721823 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:33:34.721831 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.721839 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:33:34.721846 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:33:34.721854 | orchestrator | 2025-06-19 10:33:34.721862 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-19 10:33:34.721869 | orchestrator | Thursday 19 June 2025 10:23:17 +0000 (0:00:00.695) 0:00:56.808 ********* 2025-06-19 10:33:34.721877 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.721885 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.721892 | 
orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.721900 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:33:34.721908 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:33:34.721915 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:33:34.721923 | orchestrator | 2025-06-19 10:33:34.721931 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-19 10:33:34.721939 | orchestrator | Thursday 19 June 2025 10:23:18 +0000 (0:00:01.314) 0:00:58.122 ********* 2025-06-19 10:33:34.721947 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.721954 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.721962 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.721970 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.721977 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.721985 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.721992 | orchestrator | 2025-06-19 10:33:34.722000 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-19 10:33:34.722008 | orchestrator | Thursday 19 June 2025 10:23:19 +0000 (0:00:00.797) 0:00:58.919 ********* 2025-06-19 10:33:34.722042 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.722052 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.722060 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.722067 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.722075 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.722083 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.722090 | orchestrator | 2025-06-19 10:33:34.722098 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-19 10:33:34.722106 | orchestrator | Thursday 19 June 2025 10:23:20 +0000 (0:00:01.023) 0:00:59.942 ********* 2025-06-19 10:33:34.722114 | orchestrator | ok: 
[testbed-node-3] 2025-06-19 10:33:34.722122 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:33:34.722130 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:33:34.722143 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:33:34.722151 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:33:34.722159 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:33:34.722167 | orchestrator | 2025-06-19 10:33:34.722174 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-19 10:33:34.722182 | orchestrator | Thursday 19 June 2025 10:23:21 +0000 (0:00:01.180) 0:01:01.123 ********* 2025-06-19 10:33:34.722190 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:33:34.722198 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:33:34.722206 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:33:34.722213 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:33:34.722221 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:33:34.722229 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:33:34.722236 | orchestrator | 2025-06-19 10:33:34.722244 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-19 10:33:34.722274 | orchestrator | Thursday 19 June 2025 10:23:23 +0000 (0:00:01.496) 0:01:02.619 ********* 2025-06-19 10:33:34.722283 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.722291 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.722299 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.722307 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.722314 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.722322 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.722330 | orchestrator | 2025-06-19 10:33:34.722338 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-19 10:33:34.722346 | orchestrator | Thursday 19 June 2025 10:23:24 +0000 (0:00:00.867) 
0:01:03.487 ********* 2025-06-19 10:33:34.722353 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.722361 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.722369 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.722376 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:33:34.722384 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:33:34.722392 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:33:34.722400 | orchestrator | 2025-06-19 10:33:34.722408 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-19 10:33:34.722416 | orchestrator | Thursday 19 June 2025 10:23:25 +0000 (0:00:01.179) 0:01:04.667 ********* 2025-06-19 10:33:34.722423 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:33:34.722431 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:33:34.722487 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:33:34.722495 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.722503 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.722511 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.722518 | orchestrator | 2025-06-19 10:33:34.722526 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-19 10:33:34.722534 | orchestrator | Thursday 19 June 2025 10:23:26 +0000 (0:00:00.871) 0:01:05.539 ********* 2025-06-19 10:33:34.722542 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:33:34.722549 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:33:34.722557 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:33:34.722565 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.722573 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.722580 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.722588 | orchestrator | 2025-06-19 10:33:34.722596 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-19 
10:33:34.722604 | orchestrator | Thursday 19 June 2025 10:23:27 +0000 (0:00:00.797) 0:01:06.336 ********* 2025-06-19 10:33:34.722611 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:33:34.722619 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:33:34.722627 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:33:34.722634 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.722642 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.722650 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.722657 | orchestrator | 2025-06-19 10:33:34.722665 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-19 10:33:34.722679 | orchestrator | Thursday 19 June 2025 10:23:27 +0000 (0:00:00.779) 0:01:07.116 ********* 2025-06-19 10:33:34.722687 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.722702 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.722710 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.722717 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.722723 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.722730 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.722736 | orchestrator | 2025-06-19 10:33:34.722743 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-19 10:33:34.722750 | orchestrator | Thursday 19 June 2025 10:23:28 +0000 (0:00:00.830) 0:01:07.946 ********* 2025-06-19 10:33:34.722756 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.722763 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.722769 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.722776 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.722782 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.722789 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.722795 | orchestrator | 2025-06-19 
10:33:34.722802 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-19 10:33:34.722808 | orchestrator | Thursday 19 June 2025 10:23:29 +0000 (0:00:00.995) 0:01:08.944 ********* 2025-06-19 10:33:34.722815 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.722821 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.722828 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.722834 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:33:34.722841 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:33:34.722847 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:33:34.722854 | orchestrator | 2025-06-19 10:33:34.722860 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-19 10:33:34.722867 | orchestrator | Thursday 19 June 2025 10:23:30 +0000 (0:00:01.235) 0:01:10.180 ********* 2025-06-19 10:33:34.722874 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:33:34.722880 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:33:34.722887 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:33:34.722893 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:33:34.722899 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:33:34.722906 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:33:34.722912 | orchestrator | 2025-06-19 10:33:34.722919 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-19 10:33:34.722925 | orchestrator | Thursday 19 June 2025 10:23:31 +0000 (0:00:00.914) 0:01:11.094 ********* 2025-06-19 10:33:34.722932 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:33:34.722939 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:33:34.722945 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:33:34.722952 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:33:34.722958 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:33:34.722964 | orchestrator | ok: [testbed-node-2] 
2025-06-19 10:33:34.722971 | orchestrator | 2025-06-19 10:33:34.722978 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2025-06-19 10:33:34.722984 | orchestrator | Thursday 19 June 2025 10:23:33 +0000 (0:00:01.399) 0:01:12.494 ********* 2025-06-19 10:33:34.722991 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:33:34.722998 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:33:34.723004 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:33:34.723011 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:33:34.723017 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:33:34.723024 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:33:34.723030 | orchestrator | 2025-06-19 10:33:34.723037 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2025-06-19 10:33:34.723062 | orchestrator | Thursday 19 June 2025 10:23:34 +0000 (0:00:01.731) 0:01:14.225 ********* 2025-06-19 10:33:34.723069 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:33:34.723076 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:33:34.723087 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:33:34.723093 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:33:34.723100 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:33:34.723106 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:33:34.723113 | orchestrator | 2025-06-19 10:33:34.723120 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2025-06-19 10:33:34.723126 | orchestrator | Thursday 19 June 2025 10:23:37 +0000 (0:00:02.238) 0:01:16.464 ********* 2025-06-19 10:33:34.723133 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:33:34.723140 | orchestrator | 2025-06-19 10:33:34.723146 | orchestrator | 
TASK [ceph-container-common : Stop lvmetad] ************************************ 2025-06-19 10:33:34.723153 | orchestrator | Thursday 19 June 2025 10:23:38 +0000 (0:00:01.226) 0:01:17.690 ********* 2025-06-19 10:33:34.723160 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.723166 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.723173 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.723179 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.723186 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.723192 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.723199 | orchestrator | 2025-06-19 10:33:34.723206 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2025-06-19 10:33:34.723212 | orchestrator | Thursday 19 June 2025 10:23:39 +0000 (0:00:00.868) 0:01:18.559 ********* 2025-06-19 10:33:34.723219 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.723225 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.723232 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.723238 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.723245 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.723251 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.723258 | orchestrator | 2025-06-19 10:33:34.723264 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2025-06-19 10:33:34.723271 | orchestrator | Thursday 19 June 2025 10:23:39 +0000 (0:00:00.649) 0:01:19.209 ********* 2025-06-19 10:33:34.723278 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-19 10:33:34.723284 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-19 10:33:34.723291 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-19 10:33:34.723298 | 
orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-19 10:33:34.723308 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-19 10:33:34.723315 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-19 10:33:34.723321 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-19 10:33:34.723328 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-19 10:33:34.723335 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-19 10:33:34.723342 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-19 10:33:34.723348 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-19 10:33:34.723355 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-19 10:33:34.723361 | orchestrator | 2025-06-19 10:33:34.723368 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2025-06-19 10:33:34.723374 | orchestrator | Thursday 19 June 2025 10:23:41 +0000 (0:00:01.650) 0:01:20.859 ********* 2025-06-19 10:33:34.723381 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:33:34.723388 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:33:34.723394 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:33:34.723405 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:33:34.723412 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:33:34.723419 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:33:34.723425 | orchestrator | 2025-06-19 10:33:34.723432 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2025-06-19 10:33:34.723455 | orchestrator | 
Thursday 19 June 2025 10:23:42 +0000 (0:00:01.058) 0:01:21.917 ********* 2025-06-19 10:33:34.723462 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.723468 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.723475 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.723481 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.723488 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.723494 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.723501 | orchestrator | 2025-06-19 10:33:34.723508 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2025-06-19 10:33:34.723514 | orchestrator | Thursday 19 June 2025 10:23:43 +0000 (0:00:00.809) 0:01:22.727 ********* 2025-06-19 10:33:34.723521 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.723527 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.723534 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.723540 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.723547 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.723553 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.723560 | orchestrator | 2025-06-19 10:33:34.723566 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2025-06-19 10:33:34.723573 | orchestrator | Thursday 19 June 2025 10:23:44 +0000 (0:00:01.214) 0:01:23.941 ********* 2025-06-19 10:33:34.723580 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.723586 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.723593 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.723599 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.723623 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.723630 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.723637 | orchestrator | 2025-06-19 10:33:34.723643 | 
orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2025-06-19 10:33:34.723650 | orchestrator | Thursday 19 June 2025 10:23:45 +0000 (0:00:00.738) 0:01:24.679 ********* 2025-06-19 10:33:34.723657 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:33:34.723664 | orchestrator | 2025-06-19 10:33:34.723670 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2025-06-19 10:33:34.723677 | orchestrator | Thursday 19 June 2025 10:23:46 +0000 (0:00:01.261) 0:01:25.940 ********* 2025-06-19 10:33:34.723683 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:33:34.723690 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:33:34.723696 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:33:34.723703 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:33:34.723709 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:33:34.723716 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:33:34.723722 | orchestrator | 2025-06-19 10:33:34.723729 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2025-06-19 10:33:34.723735 | orchestrator | Thursday 19 June 2025 10:25:31 +0000 (0:01:44.715) 0:03:10.656 ********* 2025-06-19 10:33:34.723742 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-19 10:33:34.723749 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-19 10:33:34.723755 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-19 10:33:34.723762 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.723768 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-19 10:33:34.723775 | orchestrator | skipping: 
[testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-19 10:33:34.723786 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-19 10:33:34.723793 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.723799 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-19 10:33:34.723806 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-19 10:33:34.723812 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-19 10:33:34.723819 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.723826 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-19 10:33:34.723836 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-19 10:33:34.723843 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-19 10:33:34.723850 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.723857 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-19 10:33:34.723863 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-19 10:33:34.723870 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-19 10:33:34.723876 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.723883 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-19 10:33:34.723890 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-19 10:33:34.723896 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-19 10:33:34.723903 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.723909 | orchestrator | 2025-06-19 10:33:34.723916 | 
orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2025-06-19 10:33:34.723922 | orchestrator | Thursday 19 June 2025 10:25:32 +0000 (0:00:00.615) 0:03:11.272 ********* 2025-06-19 10:33:34.723929 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.723936 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.723942 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.723949 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.723955 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.723962 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.723968 | orchestrator | 2025-06-19 10:33:34.723974 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2025-06-19 10:33:34.723981 | orchestrator | Thursday 19 June 2025 10:25:32 +0000 (0:00:00.760) 0:03:12.033 ********* 2025-06-19 10:33:34.723987 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.723994 | orchestrator | 2025-06-19 10:33:34.724000 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2025-06-19 10:33:34.724007 | orchestrator | Thursday 19 June 2025 10:25:32 +0000 (0:00:00.151) 0:03:12.184 ********* 2025-06-19 10:33:34.724014 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.724020 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.724026 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.724033 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.724039 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.724046 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.724052 | orchestrator | 2025-06-19 10:33:34.724059 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2025-06-19 10:33:34.724065 | orchestrator | Thursday 19 June 2025 10:25:33 +0000 (0:00:00.556) 0:03:12.740 ********* 
2025-06-19 10:33:34.724072 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.724078 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.724085 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.724091 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.724098 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.724104 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.724115 | orchestrator | 2025-06-19 10:33:34.724137 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2025-06-19 10:33:34.724144 | orchestrator | Thursday 19 June 2025 10:25:34 +0000 (0:00:00.826) 0:03:13.567 ********* 2025-06-19 10:33:34.724151 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.724157 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.724164 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.724170 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.724177 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.724183 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.724190 | orchestrator | 2025-06-19 10:33:34.724196 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2025-06-19 10:33:34.724203 | orchestrator | Thursday 19 June 2025 10:25:34 +0000 (0:00:00.591) 0:03:14.158 ********* 2025-06-19 10:33:34.724209 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:33:34.724216 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:33:34.724222 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:33:34.724229 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:33:34.724235 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:33:34.724242 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:33:34.724248 | orchestrator | 2025-06-19 10:33:34.724255 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2025-06-19 
10:33:34.724261 | orchestrator | Thursday 19 June 2025 10:25:37 +0000 (0:00:02.397) 0:03:16.555 *********
2025-06-19 10:33:34.724268 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:33:34.724274 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:33:34.724280 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:33:34.724287 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:33:34.724293 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:33:34.724300 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:33:34.724306 | orchestrator |
2025-06-19 10:33:34.724313 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2025-06-19 10:33:34.724319 | orchestrator | Thursday 19 June 2025 10:25:37 +0000 (0:00:00.643) 0:03:17.199 *********
2025-06-19 10:33:34.724326 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-19 10:33:34.724334 | orchestrator |
2025-06-19 10:33:34.724340 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2025-06-19 10:33:34.724347 | orchestrator | Thursday 19 June 2025 10:25:39 +0000 (0:00:01.372) 0:03:18.571 *********
2025-06-19 10:33:34.724353 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:33:34.724360 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:33:34.724366 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:33:34.724373 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.724379 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.724386 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.724392 | orchestrator |
2025-06-19 10:33:34.724399 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2025-06-19 10:33:34.724405 | orchestrator | Thursday 19 June 2025 10:25:40 +0000 (0:00:00.976) 0:03:19.548 *********
2025-06-19 10:33:34.724415 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:33:34.724422 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:33:34.724428 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:33:34.724476 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.724490 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.724501 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.724508 | orchestrator |
2025-06-19 10:33:34.724515 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2025-06-19 10:33:34.724522 | orchestrator | Thursday 19 June 2025 10:25:40 +0000 (0:00:00.679) 0:03:20.228 *********
2025-06-19 10:33:34.724528 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:33:34.724535 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:33:34.724541 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:33:34.724553 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.724560 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.724566 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.724572 | orchestrator |
2025-06-19 10:33:34.724579 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2025-06-19 10:33:34.724586 | orchestrator | Thursday 19 June 2025 10:25:41 +0000 (0:00:00.875) 0:03:21.104 *********
2025-06-19 10:33:34.724592 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:33:34.724599 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:33:34.724605 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:33:34.724611 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.724618 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.724624 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.724631 | orchestrator |
2025-06-19 10:33:34.724637 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2025-06-19 10:33:34.724644 | orchestrator | Thursday 19 June 2025 10:25:42 +0000 (0:00:00.472) 0:03:21.576 *********
2025-06-19 10:33:34.724650 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:33:34.724657 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:33:34.724663 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:33:34.724670 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.724676 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.724682 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.724689 | orchestrator |
2025-06-19 10:33:34.724695 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2025-06-19 10:33:34.724702 | orchestrator | Thursday 19 June 2025 10:25:42 +0000 (0:00:00.597) 0:03:22.173 *********
2025-06-19 10:33:34.724708 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:33:34.724715 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:33:34.724721 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:33:34.724728 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.724734 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.724741 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.724747 | orchestrator |
2025-06-19 10:33:34.724754 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2025-06-19 10:33:34.724760 | orchestrator | Thursday 19 June 2025 10:25:43 +0000 (0:00:00.464) 0:03:22.638 *********
2025-06-19 10:33:34.724767 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:33:34.724773 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:33:34.724780 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:33:34.724786 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.724793 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.724818 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.724826 | orchestrator |
2025-06-19 10:33:34.724832 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2025-06-19 10:33:34.724839 | orchestrator | Thursday 19 June 2025 10:25:44 +0000 (0:00:00.665) 0:03:23.303 *********
2025-06-19 10:33:34.724845 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:33:34.724852 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:33:34.724858 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:33:34.724868 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.724880 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.724887 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.724893 | orchestrator |
2025-06-19 10:33:34.724900 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2025-06-19 10:33:34.724906 | orchestrator | Thursday 19 June 2025 10:25:44 +0000 (0:00:00.483) 0:03:23.787 *********
2025-06-19 10:33:34.724913 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:33:34.724919 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:33:34.724926 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:33:34.724932 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:33:34.724939 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:33:34.724945 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:33:34.724952 | orchestrator |
2025-06-19 10:33:34.724963 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2025-06-19 10:33:34.724970 | orchestrator | Thursday 19 June 2025 10:25:45 +0000 (0:00:01.148) 0:03:24.936 *********
2025-06-19 10:33:34.724976 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-19 10:33:34.724983 | orchestrator |
2025-06-19 10:33:34.724989 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2025-06-19 10:33:34.724996 | orchestrator | Thursday 19 June 2025 10:25:46 +0000 (0:00:01.075) 0:03:26.011 *********
2025-06-19 10:33:34.725002 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2025-06-19 10:33:34.725008 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2025-06-19 10:33:34.725014 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2025-06-19 10:33:34.725020 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2025-06-19 10:33:34.725026 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2025-06-19 10:33:34.725032 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2025-06-19 10:33:34.725038 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2025-06-19 10:33:34.725044 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2025-06-19 10:33:34.725050 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2025-06-19 10:33:34.725056 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2025-06-19 10:33:34.725062 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2025-06-19 10:33:34.725072 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2025-06-19 10:33:34.725078 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2025-06-19 10:33:34.725085 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2025-06-19 10:33:34.725091 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2025-06-19 10:33:34.725097 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2025-06-19 10:33:34.725103 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2025-06-19 10:33:34.725109 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2025-06-19 10:33:34.725115 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2025-06-19 10:33:34.725121 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2025-06-19 10:33:34.725127 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2025-06-19 10:33:34.725133 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2025-06-19 10:33:34.725139 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2025-06-19 10:33:34.725145 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2025-06-19 10:33:34.725151 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2025-06-19 10:33:34.725157 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2025-06-19 10:33:34.725163 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2025-06-19 10:33:34.725169 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2025-06-19 10:33:34.725175 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2025-06-19 10:33:34.725181 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2025-06-19 10:33:34.725187 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2025-06-19 10:33:34.725193 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2025-06-19 10:33:34.725199 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2025-06-19 10:33:34.725205 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2025-06-19 10:33:34.725211 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2025-06-19 10:33:34.725217 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2025-06-19 10:33:34.725223 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2025-06-19 10:33:34.725233 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2025-06-19 10:33:34.725239 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2025-06-19 10:33:34.725245 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2025-06-19 10:33:34.725251 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2025-06-19 10:33:34.725257 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2025-06-19 10:33:34.725263 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2025-06-19 10:33:34.725269 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2025-06-19 10:33:34.725291 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2025-06-19 10:33:34.725298 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2025-06-19 10:33:34.725304 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2025-06-19 10:33:34.725310 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2025-06-19 10:33:34.725316 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2025-06-19 10:33:34.725323 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2025-06-19 10:33:34.725329 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2025-06-19 10:33:34.725335 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-19 10:33:34.725341 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-19 10:33:34.725347 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-19 10:33:34.725353 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2025-06-19 10:33:34.725359 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2025-06-19 10:33:34.725365 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2025-06-19 10:33:34.725371 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-19 10:33:34.725377 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-19 10:33:34.725383 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-19 10:33:34.725389 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-19 10:33:34.725396 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-19 10:33:34.725402 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2025-06-19 10:33:34.725408 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-19 10:33:34.725414 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2025-06-19 10:33:34.725420 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2025-06-19 10:33:34.725426 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-19 10:33:34.725432 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2025-06-19 10:33:34.725454 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-19 10:33:34.725461 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2025-06-19 10:33:34.725471 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2025-06-19 10:33:34.725477 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-19 10:33:34.725483 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2025-06-19 10:33:34.725489 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2025-06-19 10:33:34.725495 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-06-19 10:33:34.725502 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-06-19 10:33:34.725508 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-06-19 10:33:34.725521 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2025-06-19 10:33:34.725527 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2025-06-19 10:33:34.725533 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2025-06-19 10:33:34.725539 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2025-06-19 10:33:34.725545 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2025-06-19 10:33:34.725551 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2025-06-19 10:33:34.725557 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2025-06-19 10:33:34.725563 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2025-06-19 10:33:34.725570 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-06-19 10:33:34.725576 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2025-06-19 10:33:34.725582 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2025-06-19 10:33:34.725588 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-06-19 10:33:34.725594 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-06-19 10:33:34.725600 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2025-06-19 10:33:34.725606 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2025-06-19 10:33:34.725612 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2025-06-19 10:33:34.725618 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2025-06-19 10:33:34.725624 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2025-06-19 10:33:34.725630 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2025-06-19 10:33:34.725636 | orchestrator |
2025-06-19 10:33:34.725642 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2025-06-19 10:33:34.725649 | orchestrator | Thursday 19 June 2025 10:25:53 +0000 (0:00:06.936) 0:03:32.948 *********
2025-06-19 10:33:34.725655 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.725661 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.725667 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.725689 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-19 10:33:34.725696 | orchestrator |
2025-06-19 10:33:34.725702 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2025-06-19 10:33:34.725708 | orchestrator | Thursday 19 June 2025 10:25:54 +0000 (0:00:01.114) 0:03:34.062 *********
2025-06-19 10:33:34.725715 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-06-19 10:33:34.725721 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-06-19 10:33:34.725727 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-06-19 10:33:34.725733 | orchestrator |
2025-06-19 10:33:34.725739 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2025-06-19 10:33:34.725745 | orchestrator | Thursday 19 June 2025 10:25:55 +0000 (0:00:00.826) 0:03:34.889 *********
2025-06-19 10:33:34.725752 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-06-19 10:33:34.725758 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-06-19 10:33:34.725764 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-06-19 10:33:34.725770 | orchestrator |
2025-06-19 10:33:34.725776 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2025-06-19 10:33:34.725786 | orchestrator | Thursday 19 June 2025 10:25:56 +0000 (0:00:01.170) 0:03:36.059 *********
2025-06-19 10:33:34.725792 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:33:34.725798 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:33:34.725805 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:33:34.725814 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.725824 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.725831 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.725837 | orchestrator |
2025-06-19 10:33:34.725843 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2025-06-19 10:33:34.725849 | orchestrator | Thursday 19 June 2025 10:25:57 +0000 (0:00:00.874) 0:03:36.934 *********
2025-06-19 10:33:34.725855 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:33:34.725861 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:33:34.725867 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:33:34.725873 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.725879 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.725891 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.725897 | orchestrator |
2025-06-19 10:33:34.725903 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2025-06-19 10:33:34.725909 | orchestrator | Thursday 19 June 2025 10:25:58 +0000 (0:00:00.547) 0:03:37.482 *********
2025-06-19 10:33:34.725916 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:33:34.725922 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:33:34.725928 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:33:34.725934 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.725940 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.725946 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.725952 | orchestrator |
2025-06-19 10:33:34.725958 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2025-06-19 10:33:34.725964 | orchestrator | Thursday 19 June 2025 10:25:58 +0000 (0:00:00.704) 0:03:38.186 *********
2025-06-19 10:33:34.725970 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:33:34.725976 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:33:34.725982 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:33:34.725988 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.725994 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.726000 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.726006 | orchestrator |
2025-06-19 10:33:34.726012 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2025-06-19 10:33:34.726041 | orchestrator | Thursday 19 June 2025 10:25:59 +0000 (0:00:00.557) 0:03:38.744 *********
2025-06-19 10:33:34.726047 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:33:34.726053 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:33:34.726059 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:33:34.726065 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.726071 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.726077 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.726083 | orchestrator |
2025-06-19 10:33:34.726090 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-06-19 10:33:34.726096 | orchestrator | Thursday 19 June 2025 10:25:59 +0000 (0:00:00.513) 0:03:39.257 *********
2025-06-19 10:33:34.726102 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:33:34.726108 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:33:34.726114 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:33:34.726120 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.726126 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.726132 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.726138 | orchestrator |
2025-06-19 10:33:34.726144 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-06-19 10:33:34.726150 | orchestrator | Thursday 19 June 2025 10:26:00 +0000 (0:00:00.780) 0:03:40.038 *********
2025-06-19 10:33:34.726156 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:33:34.726167 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:33:34.726173 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:33:34.726179 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.726185 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.726191 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.726197 | orchestrator |
2025-06-19 10:33:34.726203 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-06-19 10:33:34.726225 | orchestrator | Thursday 19 June 2025 10:26:01 +0000 (0:00:00.536) 0:03:40.574 *********
2025-06-19 10:33:34.726232 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:33:34.726238 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:33:34.726245 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:33:34.726251 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.726257 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.726263 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.726269 | orchestrator |
2025-06-19 10:33:34.726275 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-06-19 10:33:34.726281 | orchestrator | Thursday 19 June 2025 10:26:02 +0000 (0:00:00.747) 0:03:41.322 *********
2025-06-19 10:33:34.726287 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.726294 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.726300 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.726306 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:33:34.726312 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:33:34.726318 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:33:34.726324 | orchestrator |
2025-06-19 10:33:34.726330 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2025-06-19 10:33:34.726337 | orchestrator | Thursday 19 June 2025 10:26:04 +0000 (0:00:02.925) 0:03:44.247 *********
2025-06-19 10:33:34.726343 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:33:34.726349 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:33:34.726355 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:33:34.726361 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.726367 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.726373 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.726379 | orchestrator |
2025-06-19 10:33:34.726386 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2025-06-19 10:33:34.726392 | orchestrator | Thursday 19 June 2025 10:26:06 +0000 (0:00:01.173) 0:03:45.421 *********
2025-06-19 10:33:34.726398 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:33:34.726404 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:33:34.726410 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:33:34.726417 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.726423 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.726429 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.726449 | orchestrator |
2025-06-19 10:33:34.726457 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2025-06-19 10:33:34.726463 | orchestrator | Thursday 19 June 2025 10:26:06 +0000 (0:00:00.516) 0:03:45.937 *********
2025-06-19 10:33:34.726469 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:33:34.726475 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:33:34.726481 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:33:34.726487 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.726492 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.726499 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.726505 | orchestrator |
2025-06-19 10:33:34.726511 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2025-06-19 10:33:34.726520 | orchestrator | Thursday 19 June 2025 10:26:07 +0000 (0:00:00.717) 0:03:46.655 *********
2025-06-19 10:33:34.726526 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-06-19 10:33:34.726533 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-06-19 10:33:34.726543 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-06-19 10:33:34.726549 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.726556 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.726561 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.726567 | orchestrator |
2025-06-19 10:33:34.726574 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2025-06-19 10:33:34.726580 | orchestrator | Thursday 19 June 2025 10:26:08 +0000 (0:00:00.733) 0:03:47.388 *********
2025-06-19 10:33:34.726587 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2025-06-19 10:33:34.726596 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2025-06-19 10:33:34.726603 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:33:34.726609 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2025-06-19 10:33:34.726616 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2025-06-19 10:33:34.726622 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:33:34.726644 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2025-06-19 10:33:34.726652 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2025-06-19 10:33:34.726658 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:33:34.726664 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.726671 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.726676 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.726682 | orchestrator |
2025-06-19 10:33:34.726688 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2025-06-19 10:33:34.726695 | orchestrator | Thursday 19 June 2025 10:26:09 +0000 (0:00:01.002) 0:03:48.391 *********
2025-06-19 10:33:34.726701 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:33:34.726707 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:33:34.726713 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:33:34.726719 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.726725 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.726731 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.726737 | orchestrator |
2025-06-19 10:33:34.726743 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2025-06-19 10:33:34.726749 | orchestrator | Thursday 19 June 2025 10:26:09 +0000 (0:00:00.690) 0:03:49.082 *********
2025-06-19 10:33:34.726760 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:33:34.726766 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:33:34.726772 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:33:34.726778 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.726784 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.726790 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.726796 | orchestrator |
2025-06-19 10:33:34.726802 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-06-19 10:33:34.726809 | orchestrator | Thursday 19 June 2025 10:26:10 +0000 (0:00:00.770) 0:03:49.853 *********
2025-06-19 10:33:34.726815 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:33:34.726821 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:33:34.726827 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:33:34.726833 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.726839 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.726845 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.726851 | orchestrator |
2025-06-19 10:33:34.726860 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-06-19 10:33:34.726867 | orchestrator | Thursday 19 June 2025 10:26:11 +0000 (0:00:00.642) 0:03:50.495 *********
2025-06-19 10:33:34.726873 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:33:34.726879 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:33:34.726885 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:33:34.726891 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.726897 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.726903 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.726909 | orchestrator |
2025-06-19 10:33:34.726915 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-06-19 10:33:34.726921 | orchestrator | Thursday 19 June 2025 10:26:11 +0000 (0:00:00.736) 0:03:51.232 *********
2025-06-19 10:33:34.726927 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:33:34.726933 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:33:34.726939 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:33:34.726945 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.726951 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.726957 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.726963 | orchestrator |
2025-06-19 10:33:34.726969 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-06-19 10:33:34.726975 | orchestrator | Thursday 19 June 2025 10:26:12 +0000 (0:00:00.613) 0:03:51.846 *********
2025-06-19 10:33:34.726981 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:33:34.726987 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:33:34.726993 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:33:34.726999 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.727005 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.727011 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.727017 | orchestrator |
2025-06-19 10:33:34.727023 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-06-19 10:33:34.727029 | orchestrator | Thursday 19 June 2025 10:26:13 +0000 (0:00:00.899) 0:03:52.745 *********
2025-06-19 10:33:34.727035 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-19 10:33:34.727041 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-19 10:33:34.727047 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-19 10:33:34.727053 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:33:34.727059 | orchestrator |
2025-06-19 10:33:34.727066 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-06-19 10:33:34.727072 | orchestrator | Thursday 19 June 2025 10:26:13 +0000 (0:00:00.313) 0:03:53.058 *********
2025-06-19 10:33:34.727078 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-19 10:33:34.727084 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-19 10:33:34.727090 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-19 10:33:34.727100 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:33:34.727106 | orchestrator |
2025-06-19 10:33:34.727112 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-06-19 10:33:34.727118 | orchestrator | Thursday 19 June 2025 10:26:14 +0000 (0:00:00.332) 0:03:53.391 *********
2025-06-19 10:33:34.727124 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-19 10:33:34.727145 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-19 10:33:34.727151 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-19 10:33:34.727158 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:33:34.727164 | orchestrator |
2025-06-19 10:33:34.727170 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-06-19 10:33:34.727176 | orchestrator | Thursday 19 June 2025 10:26:14 +0000 (0:00:00.296) 0:03:53.688 *********
2025-06-19 10:33:34.727182 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:33:34.727188 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:33:34.727194 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:33:34.727200 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.727207 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.727213 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.727219 | orchestrator |
2025-06-19 10:33:34.727225 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-06-19 10:33:34.727231 | orchestrator | Thursday 19 June 2025 10:26:15 +0000 (0:00:00.844) 0:03:54.533 *********
2025-06-19 10:33:34.727237 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-06-19 10:33:34.727243 |
orchestrator | skipping: [testbed-node-0] => (item=0)  2025-06-19 10:33:34.727249 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-06-19 10:33:34.727255 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-06-19 10:33:34.727261 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.727268 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-06-19 10:33:34.727273 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.727280 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-06-19 10:33:34.727286 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.727292 | orchestrator | 2025-06-19 10:33:34.727298 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2025-06-19 10:33:34.727304 | orchestrator | Thursday 19 June 2025 10:26:17 +0000 (0:00:01.877) 0:03:56.411 ********* 2025-06-19 10:33:34.727310 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:33:34.727316 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:33:34.727322 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:33:34.727329 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:33:34.727335 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:33:34.727341 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:33:34.727347 | orchestrator | 2025-06-19 10:33:34.727353 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-19 10:33:34.727359 | orchestrator | Thursday 19 June 2025 10:26:19 +0000 (0:00:02.677) 0:03:59.089 ********* 2025-06-19 10:33:34.727365 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:33:34.727371 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:33:34.727377 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:33:34.727383 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:33:34.727389 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:33:34.727395 | orchestrator | changed: 
[testbed-node-2] 2025-06-19 10:33:34.727401 | orchestrator | 2025-06-19 10:33:34.727411 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-06-19 10:33:34.727417 | orchestrator | Thursday 19 June 2025 10:26:21 +0000 (0:00:01.327) 0:04:00.417 ********* 2025-06-19 10:33:34.727423 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.727429 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.727503 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.727511 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:33:34.727522 | orchestrator | 2025-06-19 10:33:34.727528 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-06-19 10:33:34.727535 | orchestrator | Thursday 19 June 2025 10:26:21 +0000 (0:00:00.753) 0:04:01.171 ********* 2025-06-19 10:33:34.727541 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:33:34.727547 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:33:34.727553 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:33:34.727559 | orchestrator | 2025-06-19 10:33:34.727565 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-06-19 10:33:34.727571 | orchestrator | Thursday 19 June 2025 10:26:22 +0000 (0:00:00.526) 0:04:01.697 ********* 2025-06-19 10:33:34.727577 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:33:34.727583 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:33:34.727589 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:33:34.727595 | orchestrator | 2025-06-19 10:33:34.727601 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-06-19 10:33:34.727607 | orchestrator | Thursday 19 June 2025 10:26:23 +0000 (0:00:01.251) 0:04:02.949 ********* 2025-06-19 10:33:34.727613 | orchestrator | skipping: 
[testbed-node-0] => (item=testbed-node-0)  2025-06-19 10:33:34.727619 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-19 10:33:34.727625 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-19 10:33:34.727631 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.727637 | orchestrator | 2025-06-19 10:33:34.727643 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-06-19 10:33:34.727649 | orchestrator | Thursday 19 June 2025 10:26:24 +0000 (0:00:00.515) 0:04:03.464 ********* 2025-06-19 10:33:34.727655 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:33:34.727661 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:33:34.727667 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:33:34.727673 | orchestrator | 2025-06-19 10:33:34.727679 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-06-19 10:33:34.727685 | orchestrator | Thursday 19 June 2025 10:26:24 +0000 (0:00:00.405) 0:04:03.870 ********* 2025-06-19 10:33:34.727691 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.727697 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.727703 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.727709 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-19 10:33:34.727715 | orchestrator | 2025-06-19 10:33:34.727721 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-06-19 10:33:34.727728 | orchestrator | Thursday 19 June 2025 10:26:25 +0000 (0:00:00.974) 0:04:04.844 ********* 2025-06-19 10:33:34.727734 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-19 10:33:34.727758 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-19 10:33:34.727765 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-5)  2025-06-19 10:33:34.727771 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.727777 | orchestrator | 2025-06-19 10:33:34.727783 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-06-19 10:33:34.727789 | orchestrator | Thursday 19 June 2025 10:26:25 +0000 (0:00:00.340) 0:04:05.185 ********* 2025-06-19 10:33:34.727795 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.727801 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.727807 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.727813 | orchestrator | 2025-06-19 10:33:34.727819 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-06-19 10:33:34.727825 | orchestrator | Thursday 19 June 2025 10:26:26 +0000 (0:00:00.277) 0:04:05.462 ********* 2025-06-19 10:33:34.727831 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.727837 | orchestrator | 2025-06-19 10:33:34.727842 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-06-19 10:33:34.727851 | orchestrator | Thursday 19 June 2025 10:26:26 +0000 (0:00:00.193) 0:04:05.655 ********* 2025-06-19 10:33:34.727856 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.727862 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.727867 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.727872 | orchestrator | 2025-06-19 10:33:34.727877 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-06-19 10:33:34.727883 | orchestrator | Thursday 19 June 2025 10:26:26 +0000 (0:00:00.341) 0:04:05.996 ********* 2025-06-19 10:33:34.727888 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.727893 | orchestrator | 2025-06-19 10:33:34.727899 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-06-19 
10:33:34.727904 | orchestrator | Thursday 19 June 2025 10:26:27 +0000 (0:00:00.568) 0:04:06.565 ********* 2025-06-19 10:33:34.727909 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.727914 | orchestrator | 2025-06-19 10:33:34.727919 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-06-19 10:33:34.727925 | orchestrator | Thursday 19 June 2025 10:26:27 +0000 (0:00:00.215) 0:04:06.781 ********* 2025-06-19 10:33:34.727930 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.727936 | orchestrator | 2025-06-19 10:33:34.727941 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-06-19 10:33:34.727946 | orchestrator | Thursday 19 June 2025 10:26:27 +0000 (0:00:00.104) 0:04:06.885 ********* 2025-06-19 10:33:34.727951 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.727957 | orchestrator | 2025-06-19 10:33:34.727962 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-06-19 10:33:34.727967 | orchestrator | Thursday 19 June 2025 10:26:27 +0000 (0:00:00.184) 0:04:07.069 ********* 2025-06-19 10:33:34.727973 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.727978 | orchestrator | 2025-06-19 10:33:34.727987 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-06-19 10:33:34.727992 | orchestrator | Thursday 19 June 2025 10:26:27 +0000 (0:00:00.180) 0:04:07.249 ********* 2025-06-19 10:33:34.727998 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-19 10:33:34.728003 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-19 10:33:34.728008 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-19 10:33:34.728014 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.728019 | orchestrator | 2025-06-19 10:33:34.728024 | orchestrator | RUNNING HANDLER 
[ceph-handler : Set _osd_handler_called after restart] ********* 2025-06-19 10:33:34.728029 | orchestrator | Thursday 19 June 2025 10:26:28 +0000 (0:00:00.340) 0:04:07.590 ********* 2025-06-19 10:33:34.728035 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.728040 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.728045 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.728051 | orchestrator | 2025-06-19 10:33:34.728056 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-06-19 10:33:34.728061 | orchestrator | Thursday 19 June 2025 10:26:28 +0000 (0:00:00.241) 0:04:07.831 ********* 2025-06-19 10:33:34.728066 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.728072 | orchestrator | 2025-06-19 10:33:34.728077 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-06-19 10:33:34.728082 | orchestrator | Thursday 19 June 2025 10:26:28 +0000 (0:00:00.201) 0:04:08.033 ********* 2025-06-19 10:33:34.728087 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.728093 | orchestrator | 2025-06-19 10:33:34.728098 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-06-19 10:33:34.728103 | orchestrator | Thursday 19 June 2025 10:26:28 +0000 (0:00:00.233) 0:04:08.266 ********* 2025-06-19 10:33:34.728109 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.728114 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.728119 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.728124 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-19 10:33:34.728133 | orchestrator | 2025-06-19 10:33:34.728139 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-06-19 10:33:34.728144 | orchestrator | Thursday 19 June 2025 
10:26:29 +0000 (0:00:00.799) 0:04:09.065 ********* 2025-06-19 10:33:34.728149 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:33:34.728154 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:33:34.728160 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:33:34.728165 | orchestrator | 2025-06-19 10:33:34.728170 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-06-19 10:33:34.728176 | orchestrator | Thursday 19 June 2025 10:26:30 +0000 (0:00:00.341) 0:04:09.407 ********* 2025-06-19 10:33:34.728181 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:33:34.728186 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:33:34.728192 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:33:34.728197 | orchestrator | 2025-06-19 10:33:34.728202 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-06-19 10:33:34.728208 | orchestrator | Thursday 19 June 2025 10:26:31 +0000 (0:00:01.467) 0:04:10.874 ********* 2025-06-19 10:33:34.728225 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-19 10:33:34.728231 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-19 10:33:34.728236 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-19 10:33:34.728242 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.728247 | orchestrator | 2025-06-19 10:33:34.728252 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-06-19 10:33:34.728257 | orchestrator | Thursday 19 June 2025 10:26:32 +0000 (0:00:00.695) 0:04:11.570 ********* 2025-06-19 10:33:34.728263 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:33:34.728268 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:33:34.728273 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:33:34.728279 | orchestrator | 2025-06-19 10:33:34.728284 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws 
handler] ********************************** 2025-06-19 10:33:34.728289 | orchestrator | Thursday 19 June 2025 10:26:32 +0000 (0:00:00.482) 0:04:12.052 ********* 2025-06-19 10:33:34.728295 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.728300 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.728305 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.728311 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-19 10:33:34.728316 | orchestrator | 2025-06-19 10:33:34.728321 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-06-19 10:33:34.728327 | orchestrator | Thursday 19 June 2025 10:26:33 +0000 (0:00:01.098) 0:04:13.150 ********* 2025-06-19 10:33:34.728332 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:33:34.728337 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:33:34.728343 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:33:34.728348 | orchestrator | 2025-06-19 10:33:34.728353 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-06-19 10:33:34.728358 | orchestrator | Thursday 19 June 2025 10:26:34 +0000 (0:00:00.379) 0:04:13.529 ********* 2025-06-19 10:33:34.728364 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:33:34.728369 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:33:34.728374 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:33:34.728379 | orchestrator | 2025-06-19 10:33:34.728385 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-06-19 10:33:34.728390 | orchestrator | Thursday 19 June 2025 10:26:35 +0000 (0:00:01.123) 0:04:14.653 ********* 2025-06-19 10:33:34.728395 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-19 10:33:34.728401 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-19 
10:33:34.728406 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-19 10:33:34.728411 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.728417 | orchestrator | 2025-06-19 10:33:34.728425 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-06-19 10:33:34.728447 | orchestrator | Thursday 19 June 2025 10:26:36 +0000 (0:00:00.792) 0:04:15.446 ********* 2025-06-19 10:33:34.728457 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:33:34.728466 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:33:34.728475 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:33:34.728483 | orchestrator | 2025-06-19 10:33:34.728488 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2025-06-19 10:33:34.728493 | orchestrator | Thursday 19 June 2025 10:26:36 +0000 (0:00:00.484) 0:04:15.931 ********* 2025-06-19 10:33:34.728499 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.728504 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.728509 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.728514 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.728520 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.728525 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.728530 | orchestrator | 2025-06-19 10:33:34.728535 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-06-19 10:33:34.728541 | orchestrator | Thursday 19 June 2025 10:26:37 +0000 (0:00:00.879) 0:04:16.810 ********* 2025-06-19 10:33:34.728546 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.728551 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.728556 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.728562 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2025-06-19 10:33:34.728567 | orchestrator | 2025-06-19 10:33:34.728573 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-06-19 10:33:34.728578 | orchestrator | Thursday 19 June 2025 10:26:38 +0000 (0:00:00.883) 0:04:17.694 ********* 2025-06-19 10:33:34.728583 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:33:34.728588 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:33:34.728594 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:33:34.728599 | orchestrator | 2025-06-19 10:33:34.728604 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-06-19 10:33:34.728609 | orchestrator | Thursday 19 June 2025 10:26:38 +0000 (0:00:00.332) 0:04:18.026 ********* 2025-06-19 10:33:34.728615 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:33:34.728620 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:33:34.728625 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:33:34.728630 | orchestrator | 2025-06-19 10:33:34.728636 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-06-19 10:33:34.728641 | orchestrator | Thursday 19 June 2025 10:26:40 +0000 (0:00:01.361) 0:04:19.387 ********* 2025-06-19 10:33:34.728646 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-19 10:33:34.728652 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-19 10:33:34.728657 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-19 10:33:34.728662 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.728667 | orchestrator | 2025-06-19 10:33:34.728673 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-06-19 10:33:34.728678 | orchestrator | Thursday 19 June 2025 10:26:40 +0000 (0:00:00.872) 0:04:20.259 ********* 2025-06-19 10:33:34.728683 | orchestrator | ok: [testbed-node-0] 
2025-06-19 10:33:34.728688 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:33:34.728694 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:33:34.728699 | orchestrator |
2025-06-19 10:33:34.728719 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2025-06-19 10:33:34.728725 | orchestrator |
2025-06-19 10:33:34.728731 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-06-19 10:33:34.728736 | orchestrator | Thursday 19 June 2025 10:26:41 +0000 (0:00:00.846) 0:04:21.106 *********
2025-06-19 10:33:34.728741 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-19 10:33:34.728751 | orchestrator |
2025-06-19 10:33:34.728756 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-06-19 10:33:34.728762 | orchestrator | Thursday 19 June 2025 10:26:42 +0000 (0:00:00.576) 0:04:21.683 *********
2025-06-19 10:33:34.728767 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-19 10:33:34.728772 | orchestrator |
2025-06-19 10:33:34.728778 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-06-19 10:33:34.728783 | orchestrator | Thursday 19 June 2025 10:26:43 +0000 (0:00:00.705) 0:04:22.388 *********
2025-06-19 10:33:34.728788 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:33:34.728793 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:33:34.728799 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:33:34.728804 | orchestrator |
2025-06-19 10:33:34.728809 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-06-19 10:33:34.728814 | orchestrator | Thursday 19 June 2025 10:26:43 +0000 (0:00:00.735) 0:04:23.123 *********
2025-06-19 10:33:34.728820 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.728829 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.728838 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.728843 | orchestrator |
2025-06-19 10:33:34.728849 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-06-19 10:33:34.728854 | orchestrator | Thursday 19 June 2025 10:26:44 +0000 (0:00:00.305) 0:04:23.429 *********
2025-06-19 10:33:34.728859 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.728865 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.728870 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.728875 | orchestrator |
2025-06-19 10:33:34.728880 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-06-19 10:33:34.728886 | orchestrator | Thursday 19 June 2025 10:26:44 +0000 (0:00:00.297) 0:04:23.726 *********
2025-06-19 10:33:34.728891 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.728896 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.728902 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.728907 | orchestrator |
2025-06-19 10:33:34.728912 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-06-19 10:33:34.728918 | orchestrator | Thursday 19 June 2025 10:26:44 +0000 (0:00:00.543) 0:04:24.269 *********
2025-06-19 10:33:34.728923 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:33:34.728928 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:33:34.728939 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:33:34.728944 | orchestrator |
2025-06-19 10:33:34.728950 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-06-19 10:33:34.728955 | orchestrator | Thursday 19 June 2025 10:26:45 +0000 (0:00:00.774) 0:04:25.044 *********
2025-06-19 10:33:34.728960 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.728965 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.728971 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.728976 | orchestrator |
2025-06-19 10:33:34.728981 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-06-19 10:33:34.728986 | orchestrator | Thursday 19 June 2025 10:26:46 +0000 (0:00:00.333) 0:04:25.377 *********
2025-06-19 10:33:34.728992 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.728997 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.729002 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.729008 | orchestrator |
2025-06-19 10:33:34.729013 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-06-19 10:33:34.729018 | orchestrator | Thursday 19 June 2025 10:26:46 +0000 (0:00:00.381) 0:04:25.759 *********
2025-06-19 10:33:34.729024 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:33:34.729029 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:33:34.729034 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:33:34.729039 | orchestrator |
2025-06-19 10:33:34.729045 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-06-19 10:33:34.729054 | orchestrator | Thursday 19 June 2025 10:26:47 +0000 (0:00:00.877) 0:04:26.636 *********
2025-06-19 10:33:34.729059 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:33:34.729064 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:33:34.729069 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:33:34.729075 | orchestrator |
2025-06-19 10:33:34.729080 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-06-19 10:33:34.729085 | orchestrator | Thursday 19 June 2025 10:26:48 +0000 (0:00:00.681) 0:04:27.318 *********
2025-06-19 10:33:34.729091 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.729096 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.729101 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.729106 | orchestrator |
2025-06-19 10:33:34.729112 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-06-19 10:33:34.729117 | orchestrator | Thursday 19 June 2025 10:26:48 +0000 (0:00:00.260) 0:04:27.578 *********
2025-06-19 10:33:34.729122 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:33:34.729128 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:33:34.729133 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:33:34.729138 | orchestrator |
2025-06-19 10:33:34.729143 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-06-19 10:33:34.729149 | orchestrator | Thursday 19 June 2025 10:26:48 +0000 (0:00:00.332) 0:04:27.911 *********
2025-06-19 10:33:34.729154 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.729159 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.729165 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.729170 | orchestrator |
2025-06-19 10:33:34.729175 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-06-19 10:33:34.729181 | orchestrator | Thursday 19 June 2025 10:26:49 +0000 (0:00:00.442) 0:04:28.353 *********
2025-06-19 10:33:34.729186 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.729191 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.729210 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.729216 | orchestrator |
2025-06-19 10:33:34.729222 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-06-19 10:33:34.729227 | orchestrator | Thursday 19 June 2025 10:26:49 +0000 (0:00:00.273) 0:04:28.627 *********
2025-06-19 10:33:34.729233 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.729238 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.729243 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.729248 | orchestrator |
2025-06-19 10:33:34.729254 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-06-19 10:33:34.729259 | orchestrator | Thursday 19 June 2025 10:26:49 +0000 (0:00:00.274) 0:04:28.902 *********
2025-06-19 10:33:34.729264 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.729269 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.729275 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.729280 | orchestrator |
2025-06-19 10:33:34.729285 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-06-19 10:33:34.729291 | orchestrator | Thursday 19 June 2025 10:26:49 +0000 (0:00:00.257) 0:04:29.159 *********
2025-06-19 10:33:34.729296 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.729301 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:33:34.729306 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:33:34.729312 | orchestrator |
2025-06-19 10:33:34.729317 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-06-19 10:33:34.729322 | orchestrator | Thursday 19 June 2025 10:26:50 +0000 (0:00:00.564) 0:04:29.724 *********
2025-06-19 10:33:34.729328 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:33:34.729333 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:33:34.729338 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:33:34.729343 | orchestrator |
2025-06-19 10:33:34.729349 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-06-19 10:33:34.729354 | orchestrator | Thursday 19 June 2025 10:26:50 +0000 (0:00:00.335) 0:04:30.059 *********
2025-06-19 10:33:34.729363 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:33:34.729368 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:33:34.729374 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:33:34.729379 | orchestrator |
2025-06-19 10:33:34.729384 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-06-19 10:33:34.729390 | orchestrator | Thursday 19 June 2025 10:26:51 +0000 (0:00:00.343) 0:04:30.403 *********
2025-06-19 10:33:34.729395 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:33:34.729400 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:33:34.729406 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:33:34.729411 | orchestrator |
2025-06-19 10:33:34.729416 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2025-06-19 10:33:34.729421 | orchestrator | Thursday 19 June 2025 10:26:51 +0000 (0:00:00.770) 0:04:31.174 *********
2025-06-19 10:33:34.729427 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:33:34.729432 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:33:34.729470 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:33:34.729476 | orchestrator |
2025-06-19 10:33:34.729484 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2025-06-19 10:33:34.729490 | orchestrator | Thursday 19 June 2025 10:26:52 +0000 (0:00:00.341) 0:04:31.515 *********
2025-06-19 10:33:34.729495 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-19 10:33:34.729501 | orchestrator |
2025-06-19 10:33:34.729506 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2025-06-19 10:33:34.729511 | orchestrator | Thursday 19 June 2025 10:26:52 +0000 (0:00:00.577) 0:04:32.093 *********
2025-06-19 10:33:34.729517 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:33:34.729522 | orchestrator |
2025-06-19 10:33:34.729527 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2025-06-19 10:33:34.729533 | orchestrator | Thursday 19 June 2025 10:26:52 +0000 (0:00:00.162) 0:04:32.256 *********
2025-06-19 10:33:34.729538 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-06-19 10:33:34.729543 | orchestrator |
2025-06-19 10:33:34.729549 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2025-06-19 10:33:34.729554 | orchestrator | Thursday 19 June 2025 10:26:54 +0000 (0:00:01.024) 0:04:33.280 *********
2025-06-19 10:33:34.729559 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:33:34.729565 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:33:34.729570 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:33:34.729575 | orchestrator |
2025-06-19 10:33:34.729581 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2025-06-19 10:33:34.729586 | orchestrator | Thursday 19 June 2025 10:26:54 +0000 (0:00:00.664) 0:04:33.944 *********
2025-06-19 10:33:34.729591 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:33:34.729596 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:33:34.729602 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:33:34.729607 | orchestrator |
2025-06-19 10:33:34.729612 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2025-06-19 10:33:34.729618 | orchestrator | Thursday 19 June 2025 10:26:55 +0000 (0:00:00.427) 0:04:34.372 *********
2025-06-19 10:33:34.729623 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:33:34.729628 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:33:34.729634 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:33:34.729639 | orchestrator |
2025-06-19 10:33:34.729644 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2025-06-19 10:33:34.729650 | orchestrator | Thursday 19 June 2025 10:26:56 +0000 (0:00:01.224) 0:04:35.597 *********
2025-06-19
10:33:34.729655 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:33:34.729660 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:33:34.729666 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:33:34.729671 | orchestrator | 2025-06-19 10:33:34.729676 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2025-06-19 10:33:34.729682 | orchestrator | Thursday 19 June 2025 10:26:57 +0000 (0:00:00.734) 0:04:36.332 ********* 2025-06-19 10:33:34.729691 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:33:34.729696 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:33:34.729702 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:33:34.729707 | orchestrator | 2025-06-19 10:33:34.729712 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2025-06-19 10:33:34.729732 | orchestrator | Thursday 19 June 2025 10:26:58 +0000 (0:00:00.956) 0:04:37.288 ********* 2025-06-19 10:33:34.729738 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:33:34.729744 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:33:34.729749 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:33:34.729754 | orchestrator | 2025-06-19 10:33:34.729759 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2025-06-19 10:33:34.729765 | orchestrator | Thursday 19 June 2025 10:26:58 +0000 (0:00:00.659) 0:04:37.947 ********* 2025-06-19 10:33:34.729770 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:33:34.729776 | orchestrator | 2025-06-19 10:33:34.729781 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2025-06-19 10:33:34.729786 | orchestrator | Thursday 19 June 2025 10:26:59 +0000 (0:00:01.292) 0:04:39.240 ********* 2025-06-19 10:33:34.729791 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:33:34.729797 | orchestrator | 2025-06-19 10:33:34.729802 | orchestrator | TASK [ceph-mon : Copy admin 
keyring over to mons] ****************************** 2025-06-19 10:33:34.729808 | orchestrator | Thursday 19 June 2025 10:27:00 +0000 (0:00:00.753) 0:04:39.993 ********* 2025-06-19 10:33:34.729813 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-19 10:33:34.729818 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-19 10:33:34.729824 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-19 10:33:34.729829 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-19 10:33:34.729835 | orchestrator | ok: [testbed-node-1] => (item=None) 2025-06-19 10:33:34.729840 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-19 10:33:34.729845 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-19 10:33:34.729851 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2025-06-19 10:33:34.729856 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-19 10:33:34.729861 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2025-06-19 10:33:34.729867 | orchestrator | ok: [testbed-node-2] => (item=None) 2025-06-19 10:33:34.729872 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2025-06-19 10:33:34.729878 | orchestrator | 2025-06-19 10:33:34.729883 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2025-06-19 10:33:34.729888 | orchestrator | Thursday 19 June 2025 10:27:03 +0000 (0:00:03.194) 0:04:43.188 ********* 2025-06-19 10:33:34.729894 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:33:34.729899 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:33:34.729904 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:33:34.729910 | orchestrator | 2025-06-19 10:33:34.729915 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] 
************************** 2025-06-19 10:33:34.729920 | orchestrator | Thursday 19 June 2025 10:27:05 +0000 (0:00:01.402) 0:04:44.591 ********* 2025-06-19 10:33:34.729926 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:33:34.729931 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:33:34.729939 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:33:34.729944 | orchestrator | 2025-06-19 10:33:34.729949 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2025-06-19 10:33:34.729954 | orchestrator | Thursday 19 June 2025 10:27:05 +0000 (0:00:00.287) 0:04:44.878 ********* 2025-06-19 10:33:34.729959 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:33:34.729964 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:33:34.729968 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:33:34.729973 | orchestrator | 2025-06-19 10:33:34.729978 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2025-06-19 10:33:34.729986 | orchestrator | Thursday 19 June 2025 10:27:05 +0000 (0:00:00.270) 0:04:45.149 ********* 2025-06-19 10:33:34.729991 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:33:34.729996 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:33:34.730001 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:33:34.730005 | orchestrator | 2025-06-19 10:33:34.730010 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2025-06-19 10:33:34.730029 | orchestrator | Thursday 19 June 2025 10:27:07 +0000 (0:00:01.389) 0:04:46.538 ********* 2025-06-19 10:33:34.730035 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:33:34.730040 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:33:34.730045 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:33:34.730050 | orchestrator | 2025-06-19 10:33:34.730055 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2025-06-19 
10:33:34.730059 | orchestrator | Thursday 19 June 2025 10:27:08 +0000 (0:00:01.352) 0:04:47.891 ********* 2025-06-19 10:33:34.730064 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.730069 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.730074 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.730078 | orchestrator | 2025-06-19 10:33:34.730083 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2025-06-19 10:33:34.730088 | orchestrator | Thursday 19 June 2025 10:27:08 +0000 (0:00:00.269) 0:04:48.160 ********* 2025-06-19 10:33:34.730093 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:33:34.730097 | orchestrator | 2025-06-19 10:33:34.730102 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2025-06-19 10:33:34.730107 | orchestrator | Thursday 19 June 2025 10:27:09 +0000 (0:00:00.527) 0:04:48.688 ********* 2025-06-19 10:33:34.730112 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.730117 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.730122 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.730126 | orchestrator | 2025-06-19 10:33:34.730131 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2025-06-19 10:33:34.730136 | orchestrator | Thursday 19 June 2025 10:27:09 +0000 (0:00:00.487) 0:04:49.176 ********* 2025-06-19 10:33:34.730141 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.730145 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.730150 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.730155 | orchestrator | 2025-06-19 10:33:34.730160 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2025-06-19 10:33:34.730164 | orchestrator | Thursday 19 June 2025 10:27:10 
+0000 (0:00:00.308) 0:04:49.484 ********* 2025-06-19 10:33:34.730182 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:33:34.730188 | orchestrator | 2025-06-19 10:33:34.730193 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2025-06-19 10:33:34.730198 | orchestrator | Thursday 19 June 2025 10:27:10 +0000 (0:00:00.474) 0:04:49.959 ********* 2025-06-19 10:33:34.730202 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:33:34.730207 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:33:34.730212 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:33:34.730216 | orchestrator | 2025-06-19 10:33:34.730221 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2025-06-19 10:33:34.730226 | orchestrator | Thursday 19 June 2025 10:27:12 +0000 (0:00:01.771) 0:04:51.730 ********* 2025-06-19 10:33:34.730231 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:33:34.730235 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:33:34.730240 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:33:34.730245 | orchestrator | 2025-06-19 10:33:34.730250 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2025-06-19 10:33:34.730255 | orchestrator | Thursday 19 June 2025 10:27:13 +0000 (0:00:01.412) 0:04:53.143 ********* 2025-06-19 10:33:34.730263 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:33:34.730268 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:33:34.730272 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:33:34.730277 | orchestrator | 2025-06-19 10:33:34.730282 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2025-06-19 10:33:34.730287 | orchestrator | Thursday 19 June 2025 10:27:15 +0000 (0:00:01.693) 0:04:54.837 ********* 2025-06-19 10:33:34.730291 | 
orchestrator | changed: [testbed-node-1] 2025-06-19 10:33:34.730296 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:33:34.730301 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:33:34.730305 | orchestrator | 2025-06-19 10:33:34.730310 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2025-06-19 10:33:34.730315 | orchestrator | Thursday 19 June 2025 10:27:17 +0000 (0:00:02.098) 0:04:56.935 ********* 2025-06-19 10:33:34.730320 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:33:34.730325 | orchestrator | 2025-06-19 10:33:34.730330 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2025-06-19 10:33:34.730334 | orchestrator | Thursday 19 June 2025 10:27:18 +0000 (0:00:00.769) 0:04:57.704 ********* 2025-06-19 10:33:34.730339 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:33:34.730344 | orchestrator | 2025-06-19 10:33:34.730349 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2025-06-19 10:33:34.730353 | orchestrator | Thursday 19 June 2025 10:27:19 +0000 (0:00:01.242) 0:04:58.947 ********* 2025-06-19 10:33:34.730358 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:33:34.730363 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:33:34.730368 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:33:34.730372 | orchestrator | 2025-06-19 10:33:34.730380 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2025-06-19 10:33:34.730385 | orchestrator | Thursday 19 June 2025 10:27:28 +0000 (0:00:09.163) 0:05:08.110 ********* 2025-06-19 10:33:34.730390 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.730394 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.730399 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.730404 | orchestrator | 
2025-06-19 10:33:34.730409 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2025-06-19 10:33:34.730413 | orchestrator | Thursday 19 June 2025 10:27:29 +0000 (0:00:00.342) 0:05:08.453 ********* 2025-06-19 10:33:34.730419 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__32d22189b5c7132d2df0157f21abf5e92dfae224'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2025-06-19 10:33:34.730425 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__32d22189b5c7132d2df0157f21abf5e92dfae224'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2025-06-19 10:33:34.730431 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__32d22189b5c7132d2df0157f21abf5e92dfae224'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-06-19 10:33:34.730453 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__32d22189b5c7132d2df0157f21abf5e92dfae224'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2025-06-19 10:33:34.730474 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 
'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__32d22189b5c7132d2df0157f21abf5e92dfae224'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-06-19 10:33:34.730480 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__32d22189b5c7132d2df0157f21abf5e92dfae224'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__32d22189b5c7132d2df0157f21abf5e92dfae224'}])  2025-06-19 10:33:34.730486 | orchestrator | 2025-06-19 10:33:34.730491 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-19 10:33:34.730496 | orchestrator | Thursday 19 June 2025 10:27:43 +0000 (0:00:14.574) 0:05:23.028 ********* 2025-06-19 10:33:34.730501 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.730505 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.730510 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.730515 | orchestrator | 2025-06-19 10:33:34.730520 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-06-19 10:33:34.730524 | orchestrator | Thursday 19 June 2025 10:27:44 +0000 (0:00:00.404) 0:05:23.432 ********* 2025-06-19 10:33:34.730529 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:33:34.730534 | orchestrator | 2025-06-19 10:33:34.730539 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-06-19 10:33:34.730543 | orchestrator | Thursday 19 June 2025 10:27:44 +0000 
(0:00:00.735) 0:05:24.167 ********* 2025-06-19 10:33:34.730548 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:33:34.730553 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:33:34.730558 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:33:34.730562 | orchestrator | 2025-06-19 10:33:34.730567 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-06-19 10:33:34.730572 | orchestrator | Thursday 19 June 2025 10:27:45 +0000 (0:00:00.330) 0:05:24.498 ********* 2025-06-19 10:33:34.730577 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.730582 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.730586 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.730591 | orchestrator | 2025-06-19 10:33:34.730596 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-06-19 10:33:34.730601 | orchestrator | Thursday 19 June 2025 10:27:45 +0000 (0:00:00.332) 0:05:24.830 ********* 2025-06-19 10:33:34.730605 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-19 10:33:34.730613 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-19 10:33:34.730618 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-19 10:33:34.730623 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.730627 | orchestrator | 2025-06-19 10:33:34.730632 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-06-19 10:33:34.730637 | orchestrator | Thursday 19 June 2025 10:27:46 +0000 (0:00:00.838) 0:05:25.669 ********* 2025-06-19 10:33:34.730642 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:33:34.730646 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:33:34.730651 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:33:34.730656 | orchestrator | 2025-06-19 10:33:34.730661 | orchestrator | PLAY [Apply role ceph-mgr] 
***************************************************** 2025-06-19 10:33:34.730665 | orchestrator | 2025-06-19 10:33:34.730670 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-19 10:33:34.730675 | orchestrator | Thursday 19 June 2025 10:27:47 +0000 (0:00:00.778) 0:05:26.448 ********* 2025-06-19 10:33:34.730683 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:33:34.730688 | orchestrator | 2025-06-19 10:33:34.730693 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-19 10:33:34.730697 | orchestrator | Thursday 19 June 2025 10:27:47 +0000 (0:00:00.487) 0:05:26.936 ********* 2025-06-19 10:33:34.730702 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:33:34.730707 | orchestrator | 2025-06-19 10:33:34.730712 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-19 10:33:34.730716 | orchestrator | Thursday 19 June 2025 10:27:48 +0000 (0:00:00.700) 0:05:27.636 ********* 2025-06-19 10:33:34.730721 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:33:34.730726 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:33:34.730731 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:33:34.730735 | orchestrator | 2025-06-19 10:33:34.730740 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-19 10:33:34.730745 | orchestrator | Thursday 19 June 2025 10:27:49 +0000 (0:00:00.756) 0:05:28.393 ********* 2025-06-19 10:33:34.730750 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.730754 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.730759 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.730764 | orchestrator | 2025-06-19 
10:33:34.730769 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-19 10:33:34.730774 | orchestrator | Thursday 19 June 2025 10:27:49 +0000 (0:00:00.325) 0:05:28.719 ********* 2025-06-19 10:33:34.730778 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.730783 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.730788 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.730792 | orchestrator | 2025-06-19 10:33:34.730797 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-19 10:33:34.730802 | orchestrator | Thursday 19 June 2025 10:27:49 +0000 (0:00:00.317) 0:05:29.036 ********* 2025-06-19 10:33:34.730807 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.730812 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.730828 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.730833 | orchestrator | 2025-06-19 10:33:34.730838 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-19 10:33:34.730843 | orchestrator | Thursday 19 June 2025 10:27:50 +0000 (0:00:00.571) 0:05:29.607 ********* 2025-06-19 10:33:34.730848 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:33:34.730852 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:33:34.730857 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:33:34.730862 | orchestrator | 2025-06-19 10:33:34.730866 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-19 10:33:34.730871 | orchestrator | Thursday 19 June 2025 10:27:51 +0000 (0:00:00.750) 0:05:30.358 ********* 2025-06-19 10:33:34.730876 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.730881 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.730886 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.730890 | orchestrator | 2025-06-19 10:33:34.730895 | 
orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-19 10:33:34.730900 | orchestrator | Thursday 19 June 2025 10:27:51 +0000 (0:00:00.347) 0:05:30.705 ********* 2025-06-19 10:33:34.730905 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.730909 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.730914 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.730919 | orchestrator | 2025-06-19 10:33:34.730923 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-19 10:33:34.730928 | orchestrator | Thursday 19 June 2025 10:27:51 +0000 (0:00:00.313) 0:05:31.018 ********* 2025-06-19 10:33:34.730933 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:33:34.730938 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:33:34.730942 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:33:34.730950 | orchestrator | 2025-06-19 10:33:34.730955 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-19 10:33:34.730960 | orchestrator | Thursday 19 June 2025 10:27:52 +0000 (0:00:01.007) 0:05:32.026 ********* 2025-06-19 10:33:34.730965 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:33:34.730970 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:33:34.730974 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:33:34.730979 | orchestrator | 2025-06-19 10:33:34.730984 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-19 10:33:34.730988 | orchestrator | Thursday 19 June 2025 10:27:53 +0000 (0:00:00.756) 0:05:32.783 ********* 2025-06-19 10:33:34.730993 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.730998 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.731003 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.731007 | orchestrator | 2025-06-19 10:33:34.731012 | orchestrator | TASK [ceph-handler : 
Set_fact handler_mon_status] ****************************** 2025-06-19 10:33:34.731017 | orchestrator | Thursday 19 June 2025 10:27:53 +0000 (0:00:00.324) 0:05:33.108 ********* 2025-06-19 10:33:34.731022 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:33:34.731026 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:33:34.731031 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:33:34.731036 | orchestrator | 2025-06-19 10:33:34.731043 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-19 10:33:34.731048 | orchestrator | Thursday 19 June 2025 10:27:54 +0000 (0:00:00.330) 0:05:33.438 ********* 2025-06-19 10:33:34.731053 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.731058 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.731063 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.731067 | orchestrator | 2025-06-19 10:33:34.731072 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-19 10:33:34.731077 | orchestrator | Thursday 19 June 2025 10:27:54 +0000 (0:00:00.298) 0:05:33.737 ********* 2025-06-19 10:33:34.731082 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.731086 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.731091 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.731096 | orchestrator | 2025-06-19 10:33:34.731101 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-19 10:33:34.731105 | orchestrator | Thursday 19 June 2025 10:27:55 +0000 (0:00:00.536) 0:05:34.274 ********* 2025-06-19 10:33:34.731111 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.731115 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.731120 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.731125 | orchestrator | 2025-06-19 10:33:34.731129 | orchestrator | TASK [ceph-handler : Set_fact 
handler_nfs_status] ****************************** 2025-06-19 10:33:34.731134 | orchestrator | Thursday 19 June 2025 10:27:55 +0000 (0:00:00.324) 0:05:34.598 ********* 2025-06-19 10:33:34.731139 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.731144 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.731149 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.731153 | orchestrator | 2025-06-19 10:33:34.731158 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-19 10:33:34.731163 | orchestrator | Thursday 19 June 2025 10:27:55 +0000 (0:00:00.361) 0:05:34.959 ********* 2025-06-19 10:33:34.731168 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.731172 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.731177 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.731182 | orchestrator | 2025-06-19 10:33:34.731187 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-19 10:33:34.731191 | orchestrator | Thursday 19 June 2025 10:27:56 +0000 (0:00:00.326) 0:05:35.286 ********* 2025-06-19 10:33:34.731196 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:33:34.731201 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:33:34.731206 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:33:34.731210 | orchestrator | 2025-06-19 10:33:34.731215 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-19 10:33:34.731223 | orchestrator | Thursday 19 June 2025 10:27:56 +0000 (0:00:00.601) 0:05:35.888 ********* 2025-06-19 10:33:34.731228 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:33:34.731233 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:33:34.731237 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:33:34.731242 | orchestrator | 2025-06-19 10:33:34.731247 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] 
************************* 2025-06-19 10:33:34.731251 | orchestrator | Thursday 19 June 2025 10:27:56 +0000 (0:00:00.324) 0:05:36.212 ********* 2025-06-19 10:33:34.731256 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:33:34.731261 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:33:34.731266 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:33:34.731270 | orchestrator | 2025-06-19 10:33:34.731286 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2025-06-19 10:33:34.731291 | orchestrator | Thursday 19 June 2025 10:27:57 +0000 (0:00:00.587) 0:05:36.800 ********* 2025-06-19 10:33:34.731296 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-19 10:33:34.731301 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-19 10:33:34.731306 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-19 10:33:34.731311 | orchestrator | 2025-06-19 10:33:34.731315 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2025-06-19 10:33:34.731320 | orchestrator | Thursday 19 June 2025 10:27:58 +0000 (0:00:01.105) 0:05:37.906 ********* 2025-06-19 10:33:34.731325 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:33:34.731330 | orchestrator | 2025-06-19 10:33:34.731334 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2025-06-19 10:33:34.731339 | orchestrator | Thursday 19 June 2025 10:27:59 +0000 (0:00:00.493) 0:05:38.399 ********* 2025-06-19 10:33:34.731344 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:33:34.731349 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:33:34.731354 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:33:34.731358 | orchestrator | 2025-06-19 10:33:34.731363 | orchestrator | TASK [ceph-mgr : Fetch 
ceph mgr keyring] *************************************** 2025-06-19 10:33:34.731368 | orchestrator | Thursday 19 June 2025 10:27:59 +0000 (0:00:00.664) 0:05:39.064 ********* 2025-06-19 10:33:34.731373 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.731378 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.731382 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.731387 | orchestrator | 2025-06-19 10:33:34.731392 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2025-06-19 10:33:34.731396 | orchestrator | Thursday 19 June 2025 10:28:00 +0000 (0:00:00.314) 0:05:39.379 ********* 2025-06-19 10:33:34.731401 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-19 10:33:34.731406 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-19 10:33:34.731411 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-19 10:33:34.731415 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-06-19 10:33:34.731420 | orchestrator | 2025-06-19 10:33:34.731425 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2025-06-19 10:33:34.731430 | orchestrator | Thursday 19 June 2025 10:28:10 +0000 (0:00:10.677) 0:05:50.056 ********* 2025-06-19 10:33:34.731448 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:33:34.731453 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:33:34.731458 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:33:34.731463 | orchestrator | 2025-06-19 10:33:34.731467 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2025-06-19 10:33:34.731477 | orchestrator | Thursday 19 June 2025 10:28:11 +0000 (0:00:00.340) 0:05:50.397 ********* 2025-06-19 10:33:34.731482 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-06-19 10:33:34.731487 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-06-19 
10:33:34.731495 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-06-19 10:33:34.731500 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-06-19 10:33:34.731504 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-19 10:33:34.731509 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-19 10:33:34.731514 | orchestrator | 2025-06-19 10:33:34.731519 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2025-06-19 10:33:34.731523 | orchestrator | Thursday 19 June 2025 10:28:13 +0000 (0:00:02.135) 0:05:52.533 ********* 2025-06-19 10:33:34.731528 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-06-19 10:33:34.731533 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-06-19 10:33:34.731537 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-06-19 10:33:34.731542 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-19 10:33:34.731547 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-06-19 10:33:34.731551 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-06-19 10:33:34.731556 | orchestrator | 2025-06-19 10:33:34.731561 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2025-06-19 10:33:34.731566 | orchestrator | Thursday 19 June 2025 10:28:14 +0000 (0:00:01.251) 0:05:53.784 ********* 2025-06-19 10:33:34.731570 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:33:34.731575 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:33:34.731580 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:33:34.731584 | orchestrator | 2025-06-19 10:33:34.731589 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2025-06-19 10:33:34.731594 | orchestrator | Thursday 19 June 2025 10:28:15 +0000 (0:00:00.976) 0:05:54.760 ********* 2025-06-19 10:33:34.731599 | orchestrator | 
skipping: [testbed-node-0] 2025-06-19 10:33:34.731603 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.731608 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.731612 | orchestrator | 2025-06-19 10:33:34.731617 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2025-06-19 10:33:34.731622 | orchestrator | Thursday 19 June 2025 10:28:15 +0000 (0:00:00.312) 0:05:55.072 ********* 2025-06-19 10:33:34.731627 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.731631 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.731636 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.731641 | orchestrator | 2025-06-19 10:33:34.731645 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2025-06-19 10:33:34.731650 | orchestrator | Thursday 19 June 2025 10:28:16 +0000 (0:00:00.316) 0:05:55.389 ********* 2025-06-19 10:33:34.731655 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:33:34.731659 | orchestrator | 2025-06-19 10:33:34.731664 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2025-06-19 10:33:34.731669 | orchestrator | Thursday 19 June 2025 10:28:16 +0000 (0:00:00.539) 0:05:55.928 ********* 2025-06-19 10:33:34.731685 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.731691 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.731695 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.731700 | orchestrator | 2025-06-19 10:33:34.731705 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2025-06-19 10:33:34.731710 | orchestrator | Thursday 19 June 2025 10:28:17 +0000 (0:00:00.601) 0:05:56.529 ********* 2025-06-19 10:33:34.731714 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.731719 | 
orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.731724 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.731728 | orchestrator | 2025-06-19 10:33:34.731733 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2025-06-19 10:33:34.731738 | orchestrator | Thursday 19 June 2025 10:28:17 +0000 (0:00:00.325) 0:05:56.855 ********* 2025-06-19 10:33:34.731742 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:33:34.731751 | orchestrator | 2025-06-19 10:33:34.731756 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2025-06-19 10:33:34.731760 | orchestrator | Thursday 19 June 2025 10:28:18 +0000 (0:00:00.471) 0:05:57.326 ********* 2025-06-19 10:33:34.731765 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:33:34.731770 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:33:34.731774 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:33:34.731779 | orchestrator | 2025-06-19 10:33:34.731784 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2025-06-19 10:33:34.731788 | orchestrator | Thursday 19 June 2025 10:28:19 +0000 (0:00:01.378) 0:05:58.705 ********* 2025-06-19 10:33:34.731793 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:33:34.731798 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:33:34.731803 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:33:34.731807 | orchestrator | 2025-06-19 10:33:34.731812 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2025-06-19 10:33:34.731817 | orchestrator | Thursday 19 June 2025 10:28:20 +0000 (0:00:01.130) 0:05:59.835 ********* 2025-06-19 10:33:34.731821 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:33:34.731826 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:33:34.731831 | 
orchestrator | changed: [testbed-node-2] 2025-06-19 10:33:34.731835 | orchestrator | 2025-06-19 10:33:34.731840 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2025-06-19 10:33:34.731845 | orchestrator | Thursday 19 June 2025 10:28:22 +0000 (0:00:01.675) 0:06:01.510 ********* 2025-06-19 10:33:34.731849 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:33:34.731854 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:33:34.731859 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:33:34.731863 | orchestrator | 2025-06-19 10:33:34.731868 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2025-06-19 10:33:34.731873 | orchestrator | Thursday 19 June 2025 10:28:24 +0000 (0:00:02.188) 0:06:03.699 ********* 2025-06-19 10:33:34.731880 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.731885 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.731890 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-06-19 10:33:34.731895 | orchestrator | 2025-06-19 10:33:34.731900 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2025-06-19 10:33:34.731904 | orchestrator | Thursday 19 June 2025 10:28:25 +0000 (0:00:00.741) 0:06:04.440 ********* 2025-06-19 10:33:34.731909 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2025-06-19 10:33:34.731914 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2025-06-19 10:33:34.731919 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2025-06-19 10:33:34.731923 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 
2025-06-19 10:33:34.731928 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-06-19 10:33:34.731933 | orchestrator | 2025-06-19 10:33:34.731938 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2025-06-19 10:33:34.731942 | orchestrator | Thursday 19 June 2025 10:28:49 +0000 (0:00:24.127) 0:06:28.568 ********* 2025-06-19 10:33:34.731947 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-06-19 10:33:34.731952 | orchestrator | 2025-06-19 10:33:34.731957 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-06-19 10:33:34.731961 | orchestrator | Thursday 19 June 2025 10:28:50 +0000 (0:00:01.189) 0:06:29.758 ********* 2025-06-19 10:33:34.731966 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:33:34.731971 | orchestrator | 2025-06-19 10:33:34.731975 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2025-06-19 10:33:34.731980 | orchestrator | Thursday 19 June 2025 10:28:50 +0000 (0:00:00.330) 0:06:30.088 ********* 2025-06-19 10:33:34.731988 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:33:34.731993 | orchestrator | 2025-06-19 10:33:34.731997 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2025-06-19 10:33:34.732002 | orchestrator | Thursday 19 June 2025 10:28:50 +0000 (0:00:00.161) 0:06:30.250 ********* 2025-06-19 10:33:34.732007 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-06-19 10:33:34.732012 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-06-19 10:33:34.732016 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-06-19 10:33:34.732021 | orchestrator | 2025-06-19 10:33:34.732026 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2025-06-19 10:33:34.732030 | orchestrator | Thursday 19 June 2025 10:28:57 +0000 (0:00:06.391) 0:06:36.641 ********* 2025-06-19 10:33:34.732035 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-06-19 10:33:34.732050 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-06-19 10:33:34.732056 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-06-19 10:33:34.732061 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-06-19 10:33:34.732065 | orchestrator | 2025-06-19 10:33:34.732070 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-19 10:33:34.732075 | orchestrator | Thursday 19 June 2025 10:29:02 +0000 (0:00:05.029) 0:06:41.671 ********* 2025-06-19 10:33:34.732080 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:33:34.732084 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:33:34.732089 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:33:34.732094 | orchestrator | 2025-06-19 10:33:34.732098 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-06-19 10:33:34.732103 | orchestrator | Thursday 19 June 2025 10:29:03 +0000 (0:00:00.631) 0:06:42.302 ********* 2025-06-19 10:33:34.732108 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:33:34.732113 | orchestrator | 2025-06-19 10:33:34.732117 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-06-19 10:33:34.732122 | orchestrator | Thursday 19 June 2025 10:29:03 +0000 (0:00:00.603) 0:06:42.905 ********* 2025-06-19 10:33:34.732127 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:33:34.732131 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:33:34.732136 | orchestrator | ok: 
[testbed-node-2] 2025-06-19 10:33:34.732141 | orchestrator | 2025-06-19 10:33:34.732146 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-06-19 10:33:34.732150 | orchestrator | Thursday 19 June 2025 10:29:03 +0000 (0:00:00.297) 0:06:43.203 ********* 2025-06-19 10:33:34.732155 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:33:34.732160 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:33:34.732164 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:33:34.732169 | orchestrator | 2025-06-19 10:33:34.732174 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-06-19 10:33:34.732178 | orchestrator | Thursday 19 June 2025 10:29:05 +0000 (0:00:01.175) 0:06:44.379 ********* 2025-06-19 10:33:34.732183 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-19 10:33:34.732188 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-19 10:33:34.732193 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-19 10:33:34.732197 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.732202 | orchestrator | 2025-06-19 10:33:34.732207 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-06-19 10:33:34.732211 | orchestrator | Thursday 19 June 2025 10:29:05 +0000 (0:00:00.583) 0:06:44.963 ********* 2025-06-19 10:33:34.732216 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:33:34.732221 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:33:34.732230 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:33:34.732234 | orchestrator | 2025-06-19 10:33:34.732242 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-06-19 10:33:34.732247 | orchestrator | 2025-06-19 10:33:34.732252 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-19 
10:33:34.732256 | orchestrator | Thursday 19 June 2025 10:29:06 +0000 (0:00:00.679) 0:06:45.643 ********* 2025-06-19 10:33:34.732261 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-19 10:33:34.732266 | orchestrator | 2025-06-19 10:33:34.732271 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-19 10:33:34.732275 | orchestrator | Thursday 19 June 2025 10:29:06 +0000 (0:00:00.465) 0:06:46.108 ********* 2025-06-19 10:33:34.732280 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-19 10:33:34.732285 | orchestrator | 2025-06-19 10:33:34.732290 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-19 10:33:34.732294 | orchestrator | Thursday 19 June 2025 10:29:07 +0000 (0:00:00.691) 0:06:46.800 ********* 2025-06-19 10:33:34.732299 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.732304 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.732308 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.732313 | orchestrator | 2025-06-19 10:33:34.732318 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-19 10:33:34.732322 | orchestrator | Thursday 19 June 2025 10:29:07 +0000 (0:00:00.320) 0:06:47.120 ********* 2025-06-19 10:33:34.732327 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:33:34.732332 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:33:34.732337 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:33:34.732341 | orchestrator | 2025-06-19 10:33:34.732346 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-19 10:33:34.732351 | orchestrator | Thursday 19 June 2025 10:29:08 +0000 (0:00:00.665) 0:06:47.786 ********* 
2025-06-19 10:33:34.732355 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:33:34.732360 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:33:34.732365 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:33:34.732369 | orchestrator | 2025-06-19 10:33:34.732374 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-19 10:33:34.732379 | orchestrator | Thursday 19 June 2025 10:29:09 +0000 (0:00:00.690) 0:06:48.476 ********* 2025-06-19 10:33:34.732383 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:33:34.732388 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:33:34.732393 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:33:34.732397 | orchestrator | 2025-06-19 10:33:34.732402 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-19 10:33:34.732407 | orchestrator | Thursday 19 June 2025 10:29:10 +0000 (0:00:00.981) 0:06:49.458 ********* 2025-06-19 10:33:34.732411 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.732416 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.732421 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.732425 | orchestrator | 2025-06-19 10:33:34.732430 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-19 10:33:34.732481 | orchestrator | Thursday 19 June 2025 10:29:10 +0000 (0:00:00.385) 0:06:49.844 ********* 2025-06-19 10:33:34.732489 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.732494 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.732499 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.732504 | orchestrator | 2025-06-19 10:33:34.732508 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-19 10:33:34.732513 | orchestrator | Thursday 19 June 2025 10:29:10 +0000 (0:00:00.291) 0:06:50.135 ********* 2025-06-19 10:33:34.732518 | 
orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.732523 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.732527 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.732536 | orchestrator | 2025-06-19 10:33:34.732541 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-19 10:33:34.732546 | orchestrator | Thursday 19 June 2025 10:29:11 +0000 (0:00:00.289) 0:06:50.425 ********* 2025-06-19 10:33:34.732551 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:33:34.732555 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:33:34.732560 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:33:34.732565 | orchestrator | 2025-06-19 10:33:34.732569 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-19 10:33:34.732574 | orchestrator | Thursday 19 June 2025 10:29:12 +0000 (0:00:00.911) 0:06:51.336 ********* 2025-06-19 10:33:34.732579 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:33:34.732583 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:33:34.732588 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:33:34.732593 | orchestrator | 2025-06-19 10:33:34.732597 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-19 10:33:34.732602 | orchestrator | Thursday 19 June 2025 10:29:12 +0000 (0:00:00.707) 0:06:52.043 ********* 2025-06-19 10:33:34.732607 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.732612 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.732616 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.732621 | orchestrator | 2025-06-19 10:33:34.732626 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-19 10:33:34.732630 | orchestrator | Thursday 19 June 2025 10:29:13 +0000 (0:00:00.306) 0:06:52.350 ********* 2025-06-19 10:33:34.732635 | orchestrator | skipping: 
[testbed-node-3] 2025-06-19 10:33:34.732640 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.732644 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.732649 | orchestrator | 2025-06-19 10:33:34.732654 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-19 10:33:34.732659 | orchestrator | Thursday 19 June 2025 10:29:13 +0000 (0:00:00.294) 0:06:52.645 ********* 2025-06-19 10:33:34.732663 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:33:34.732668 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:33:34.732673 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:33:34.732677 | orchestrator | 2025-06-19 10:33:34.732682 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-19 10:33:34.732687 | orchestrator | Thursday 19 June 2025 10:29:13 +0000 (0:00:00.554) 0:06:53.200 ********* 2025-06-19 10:33:34.732692 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:33:34.732696 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:33:34.732704 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:33:34.732709 | orchestrator | 2025-06-19 10:33:34.732714 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-19 10:33:34.732719 | orchestrator | Thursday 19 June 2025 10:29:14 +0000 (0:00:00.327) 0:06:53.527 ********* 2025-06-19 10:33:34.732723 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:33:34.732728 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:33:34.732733 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:33:34.732737 | orchestrator | 2025-06-19 10:33:34.732742 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-19 10:33:34.732747 | orchestrator | Thursday 19 June 2025 10:29:14 +0000 (0:00:00.334) 0:06:53.862 ********* 2025-06-19 10:33:34.732752 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.732756 | 
orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.732761 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.732766 | orchestrator | 2025-06-19 10:33:34.732771 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-19 10:33:34.732775 | orchestrator | Thursday 19 June 2025 10:29:14 +0000 (0:00:00.287) 0:06:54.149 ********* 2025-06-19 10:33:34.732780 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.732785 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.732789 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.732794 | orchestrator | 2025-06-19 10:33:34.732799 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-19 10:33:34.732807 | orchestrator | Thursday 19 June 2025 10:29:15 +0000 (0:00:00.555) 0:06:54.705 ********* 2025-06-19 10:33:34.732811 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.732816 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.732821 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.732825 | orchestrator | 2025-06-19 10:33:34.732830 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-19 10:33:34.732835 | orchestrator | Thursday 19 June 2025 10:29:15 +0000 (0:00:00.313) 0:06:55.019 ********* 2025-06-19 10:33:34.732839 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:33:34.732844 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:33:34.732849 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:33:34.732854 | orchestrator | 2025-06-19 10:33:34.732859 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-19 10:33:34.732863 | orchestrator | Thursday 19 June 2025 10:29:16 +0000 (0:00:00.318) 0:06:55.338 ********* 2025-06-19 10:33:34.732868 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:33:34.732872 | orchestrator | ok: 
[testbed-node-4] 2025-06-19 10:33:34.732877 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:33:34.732881 | orchestrator | 2025-06-19 10:33:34.732886 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2025-06-19 10:33:34.732890 | orchestrator | Thursday 19 June 2025 10:29:16 +0000 (0:00:00.866) 0:06:56.204 ********* 2025-06-19 10:33:34.732895 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:33:34.732899 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:33:34.732903 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:33:34.732908 | orchestrator | 2025-06-19 10:33:34.732912 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2025-06-19 10:33:34.732917 | orchestrator | Thursday 19 June 2025 10:29:17 +0000 (0:00:00.342) 0:06:56.547 ********* 2025-06-19 10:33:34.732922 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-19 10:33:34.732929 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-19 10:33:34.732934 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-19 10:33:34.732938 | orchestrator | 2025-06-19 10:33:34.732943 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2025-06-19 10:33:34.732947 | orchestrator | Thursday 19 June 2025 10:29:17 +0000 (0:00:00.613) 0:06:57.160 ********* 2025-06-19 10:33:34.732952 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-19 10:33:34.732956 | orchestrator | 2025-06-19 10:33:34.732961 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2025-06-19 10:33:34.732965 | orchestrator | Thursday 19 June 2025 10:29:18 +0000 (0:00:00.518) 0:06:57.679 ********* 2025-06-19 10:33:34.732969 | orchestrator | skipping: 
[testbed-node-3] 2025-06-19 10:33:34.732974 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.732978 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.732983 | orchestrator | 2025-06-19 10:33:34.732987 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2025-06-19 10:33:34.732992 | orchestrator | Thursday 19 June 2025 10:29:18 +0000 (0:00:00.528) 0:06:58.208 ********* 2025-06-19 10:33:34.732996 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.733001 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.733005 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.733010 | orchestrator | 2025-06-19 10:33:34.733014 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2025-06-19 10:33:34.733018 | orchestrator | Thursday 19 June 2025 10:29:19 +0000 (0:00:00.299) 0:06:58.508 ********* 2025-06-19 10:33:34.733023 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:33:34.733027 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:33:34.733032 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:33:34.733036 | orchestrator | 2025-06-19 10:33:34.733041 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2025-06-19 10:33:34.733049 | orchestrator | Thursday 19 June 2025 10:29:19 +0000 (0:00:00.583) 0:06:59.091 ********* 2025-06-19 10:33:34.733053 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:33:34.733057 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:33:34.733062 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:33:34.733066 | orchestrator | 2025-06-19 10:33:34.733071 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2025-06-19 10:33:34.733075 | orchestrator | Thursday 19 June 2025 10:29:20 +0000 (0:00:00.375) 0:06:59.467 ********* 2025-06-19 10:33:34.733080 | orchestrator | changed: [testbed-node-3] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-06-19 10:33:34.733084 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-06-19 10:33:34.733091 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-06-19 10:33:34.733096 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-06-19 10:33:34.733100 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-06-19 10:33:34.733105 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-06-19 10:33:34.733109 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-06-19 10:33:34.733113 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-06-19 10:33:34.733118 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-06-19 10:33:34.733122 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-06-19 10:33:34.733127 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-06-19 10:33:34.733131 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-06-19 10:33:34.733135 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-06-19 10:33:34.733140 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-06-19 10:33:34.733144 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-06-19 10:33:34.733149 | orchestrator | 2025-06-19 10:33:34.733153 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
2025-06-19 10:33:34.733158 | orchestrator | Thursday 19 June 2025 10:29:24 +0000 (0:00:04.119) 0:07:03.587 ********* 2025-06-19 10:33:34.733162 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.733166 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.733171 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.733175 | orchestrator | 2025-06-19 10:33:34.733180 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2025-06-19 10:33:34.733184 | orchestrator | Thursday 19 June 2025 10:29:24 +0000 (0:00:00.315) 0:07:03.902 ********* 2025-06-19 10:33:34.733188 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-19 10:33:34.733193 | orchestrator | 2025-06-19 10:33:34.733197 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2025-06-19 10:33:34.733202 | orchestrator | Thursday 19 June 2025 10:29:25 +0000 (0:00:00.504) 0:07:04.407 ********* 2025-06-19 10:33:34.733206 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-06-19 10:33:34.733211 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-06-19 10:33:34.733215 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-06-19 10:33:34.733222 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-06-19 10:33:34.733226 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-06-19 10:33:34.733231 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-06-19 10:33:34.733239 | orchestrator | 2025-06-19 10:33:34.733243 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2025-06-19 10:33:34.733248 | orchestrator | Thursday 19 June 2025 10:29:26 +0000 (0:00:01.152) 0:07:05.559 ********* 2025-06-19 10:33:34.733252 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2025-06-19 10:33:34.733257 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-19 10:33:34.733261 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-19 10:33:34.733265 | orchestrator | 2025-06-19 10:33:34.733270 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2025-06-19 10:33:34.733274 | orchestrator | Thursday 19 June 2025 10:29:28 +0000 (0:00:02.111) 0:07:07.671 ********* 2025-06-19 10:33:34.733279 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-19 10:33:34.733283 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-19 10:33:34.733288 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:33:34.733292 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-19 10:33:34.733297 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-06-19 10:33:34.733301 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:33:34.733305 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-19 10:33:34.733310 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-06-19 10:33:34.733314 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:33:34.733319 | orchestrator | 2025-06-19 10:33:34.733323 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2025-06-19 10:33:34.733328 | orchestrator | Thursday 19 June 2025 10:29:29 +0000 (0:00:01.282) 0:07:08.953 ********* 2025-06-19 10:33:34.733332 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-19 10:33:34.733337 | orchestrator | 2025-06-19 10:33:34.733341 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2025-06-19 10:33:34.733346 | orchestrator | Thursday 19 June 2025 10:29:31 +0000 (0:00:02.189) 0:07:11.143 ********* 2025-06-19 10:33:34.733350 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-19 10:33:34.733355 | orchestrator | 2025-06-19 10:33:34.733359 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2025-06-19 10:33:34.733364 | orchestrator | Thursday 19 June 2025 10:29:32 +0000 (0:00:00.715) 0:07:11.859 ********* 2025-06-19 10:33:34.733368 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-6ed986be-d550-5e98-86ee-1d899c3b1ca9', 'data_vg': 'ceph-6ed986be-d550-5e98-86ee-1d899c3b1ca9'}) 2025-06-19 10:33:34.733376 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-3f69fe47-683a-554f-92f7-031e2a26df27', 'data_vg': 'ceph-3f69fe47-683a-554f-92f7-031e2a26df27'}) 2025-06-19 10:33:34.733381 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-3c3fffd7-e076-56d5-815a-37625d7b3693', 'data_vg': 'ceph-3c3fffd7-e076-56d5-815a-37625d7b3693'}) 2025-06-19 10:33:34.733386 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-04cfa187-5820-5d05-93de-747bac6f19c1', 'data_vg': 'ceph-04cfa187-5820-5d05-93de-747bac6f19c1'}) 2025-06-19 10:33:34.733390 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-79abc216-b4ba-5883-a19f-da26bd64d731', 'data_vg': 'ceph-79abc216-b4ba-5883-a19f-da26bd64d731'}) 2025-06-19 10:33:34.733395 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-eebf63d4-54bc-5b4a-b141-3683d252bf06', 'data_vg': 'ceph-eebf63d4-54bc-5b4a-b141-3683d252bf06'}) 2025-06-19 10:33:34.733399 | orchestrator | 2025-06-19 10:33:34.733404 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2025-06-19 10:33:34.733408 | orchestrator | Thursday 19 June 2025 10:30:13 +0000 (0:00:40.955) 0:07:52.815 ********* 2025-06-19 10:33:34.733413 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.733417 | orchestrator | skipping: [testbed-node-4] 2025-06-19 
10:33:34.733421 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.733426 | orchestrator | 2025-06-19 10:33:34.733447 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2025-06-19 10:33:34.733456 | orchestrator | Thursday 19 June 2025 10:30:13 +0000 (0:00:00.298) 0:07:53.113 ********* 2025-06-19 10:33:34.733463 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-19 10:33:34.733470 | orchestrator | 2025-06-19 10:33:34.733478 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2025-06-19 10:33:34.733483 | orchestrator | Thursday 19 June 2025 10:30:14 +0000 (0:00:00.728) 0:07:53.842 ********* 2025-06-19 10:33:34.733488 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:33:34.733492 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:33:34.733497 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:33:34.733501 | orchestrator | 2025-06-19 10:33:34.733505 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2025-06-19 10:33:34.733510 | orchestrator | Thursday 19 June 2025 10:30:15 +0000 (0:00:00.649) 0:07:54.491 ********* 2025-06-19 10:33:34.733514 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:33:34.733519 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:33:34.733523 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:33:34.733528 | orchestrator | 2025-06-19 10:33:34.733532 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2025-06-19 10:33:34.733537 | orchestrator | Thursday 19 June 2025 10:30:17 +0000 (0:00:02.631) 0:07:57.123 ********* 2025-06-19 10:33:34.733541 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-19 10:33:34.733546 | orchestrator | 2025-06-19 10:33:34.733553 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2025-06-19 10:33:34.733557 | orchestrator | Thursday 19 June 2025 10:30:18 +0000 (0:00:00.502) 0:07:57.625 ********* 2025-06-19 10:33:34.733562 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:33:34.733566 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:33:34.733571 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:33:34.733575 | orchestrator | 2025-06-19 10:33:34.733580 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2025-06-19 10:33:34.733584 | orchestrator | Thursday 19 June 2025 10:30:19 +0000 (0:00:01.496) 0:07:59.122 ********* 2025-06-19 10:33:34.733589 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:33:34.733593 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:33:34.733597 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:33:34.733602 | orchestrator | 2025-06-19 10:33:34.733606 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2025-06-19 10:33:34.733611 | orchestrator | Thursday 19 June 2025 10:30:21 +0000 (0:00:01.202) 0:08:00.324 ********* 2025-06-19 10:33:34.733615 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:33:34.733620 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:33:34.733624 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:33:34.733629 | orchestrator | 2025-06-19 10:33:34.733633 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2025-06-19 10:33:34.733637 | orchestrator | Thursday 19 June 2025 10:30:23 +0000 (0:00:01.985) 0:08:02.309 ********* 2025-06-19 10:33:34.733642 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.733646 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.733651 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.733655 | orchestrator | 2025-06-19 10:33:34.733659 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2025-06-19 10:33:34.733664 | orchestrator | Thursday 19 June 2025 10:30:23 +0000 (0:00:00.292) 0:08:02.601 ********* 2025-06-19 10:33:34.733668 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.733673 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.733677 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.733682 | orchestrator | 2025-06-19 10:33:34.733686 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2025-06-19 10:33:34.733691 | orchestrator | Thursday 19 June 2025 10:30:23 +0000 (0:00:00.563) 0:08:03.165 ********* 2025-06-19 10:33:34.733701 | orchestrator | ok: [testbed-node-3] => (item=1) 2025-06-19 10:33:34.733706 | orchestrator | ok: [testbed-node-4] => (item=3) 2025-06-19 10:33:34.733710 | orchestrator | ok: [testbed-node-5] => (item=5) 2025-06-19 10:33:34.733714 | orchestrator | ok: [testbed-node-3] => (item=4) 2025-06-19 10:33:34.733719 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-06-19 10:33:34.733723 | orchestrator | ok: [testbed-node-5] => (item=2) 2025-06-19 10:33:34.733728 | orchestrator | 2025-06-19 10:33:34.733732 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2025-06-19 10:33:34.733737 | orchestrator | Thursday 19 June 2025 10:30:24 +0000 (0:00:00.993) 0:08:04.159 ********* 2025-06-19 10:33:34.733741 | orchestrator | changed: [testbed-node-3] => (item=1) 2025-06-19 10:33:34.733746 | orchestrator | changed: [testbed-node-4] => (item=3) 2025-06-19 10:33:34.733750 | orchestrator | changed: [testbed-node-5] => (item=5) 2025-06-19 10:33:34.733755 | orchestrator | changed: [testbed-node-3] => (item=4) 2025-06-19 10:33:34.733759 | orchestrator | changed: [testbed-node-4] => (item=0) 2025-06-19 10:33:34.733763 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-06-19 10:33:34.733768 | orchestrator | 2025-06-19 10:33:34.733772 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2025-06-19 10:33:34.733777 | orchestrator | Thursday 19 June 2025 10:30:27 +0000 (0:00:02.210) 0:08:06.370 ********* 2025-06-19 10:33:34.733781 | orchestrator | changed: [testbed-node-3] => (item=1) 2025-06-19 10:33:34.733786 | orchestrator | changed: [testbed-node-4] => (item=3) 2025-06-19 10:33:34.733790 | orchestrator | changed: [testbed-node-5] => (item=5) 2025-06-19 10:33:34.733795 | orchestrator | changed: [testbed-node-3] => (item=4) 2025-06-19 10:33:34.733799 | orchestrator | changed: [testbed-node-4] => (item=0) 2025-06-19 10:33:34.733803 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-06-19 10:33:34.733808 | orchestrator | 2025-06-19 10:33:34.733812 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2025-06-19 10:33:34.733817 | orchestrator | Thursday 19 June 2025 10:30:31 +0000 (0:00:03.929) 0:08:10.300 ********* 2025-06-19 10:33:34.733821 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.733826 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.733830 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-06-19 10:33:34.733835 | orchestrator | 2025-06-19 10:33:34.733839 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2025-06-19 10:33:34.733843 | orchestrator | Thursday 19 June 2025 10:30:33 +0000 (0:00:02.836) 0:08:13.136 ********* 2025-06-19 10:33:34.733848 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.733852 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.733857 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2025-06-19 10:33:34.733861 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-06-19 10:33:34.733866 | orchestrator | 2025-06-19 10:33:34.733870 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2025-06-19 10:33:34.733875 | orchestrator | Thursday 19 June 2025 10:30:46 +0000 (0:00:12.478) 0:08:25.615 ********* 2025-06-19 10:33:34.733879 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.733883 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.733888 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.733892 | orchestrator | 2025-06-19 10:33:34.733897 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-19 10:33:34.733901 | orchestrator | Thursday 19 June 2025 10:30:47 +0000 (0:00:01.041) 0:08:26.656 ********* 2025-06-19 10:33:34.733905 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.733910 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.733944 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.733954 | orchestrator | 2025-06-19 10:33:34.733959 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-06-19 10:33:34.733966 | orchestrator | Thursday 19 June 2025 10:30:47 +0000 (0:00:00.340) 0:08:26.997 ********* 2025-06-19 10:33:34.733975 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-19 10:33:34.733979 | orchestrator | 2025-06-19 10:33:34.733984 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-06-19 10:33:34.733988 | orchestrator | Thursday 19 June 2025 10:30:48 +0000 (0:00:00.527) 0:08:27.525 ********* 2025-06-19 10:33:34.733993 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-19 10:33:34.733997 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2025-06-19 10:33:34.734002 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-19 10:33:34.734006 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.734010 | orchestrator | 2025-06-19 10:33:34.734040 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-06-19 10:33:34.734045 | orchestrator | Thursday 19 June 2025 10:30:49 +0000 (0:00:00.863) 0:08:28.389 ********* 2025-06-19 10:33:34.734050 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.734054 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.734059 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.734063 | orchestrator | 2025-06-19 10:33:34.734068 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-06-19 10:33:34.734072 | orchestrator | Thursday 19 June 2025 10:30:49 +0000 (0:00:00.311) 0:08:28.700 ********* 2025-06-19 10:33:34.734076 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.734081 | orchestrator | 2025-06-19 10:33:34.734085 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-06-19 10:33:34.734090 | orchestrator | Thursday 19 June 2025 10:30:49 +0000 (0:00:00.224) 0:08:28.925 ********* 2025-06-19 10:33:34.734094 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.734099 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.734103 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.734108 | orchestrator | 2025-06-19 10:33:34.734112 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-06-19 10:33:34.734116 | orchestrator | Thursday 19 June 2025 10:30:49 +0000 (0:00:00.297) 0:08:29.223 ********* 2025-06-19 10:33:34.734121 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.734125 | orchestrator | 2025-06-19 10:33:34.734130 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2025-06-19 10:33:34.734134 | orchestrator | Thursday 19 June 2025 10:30:50 +0000 (0:00:00.226) 0:08:29.449 ********* 2025-06-19 10:33:34.734139 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.734143 | orchestrator | 2025-06-19 10:33:34.734148 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-06-19 10:33:34.734152 | orchestrator | Thursday 19 June 2025 10:30:50 +0000 (0:00:00.195) 0:08:29.644 ********* 2025-06-19 10:33:34.734157 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.734161 | orchestrator | 2025-06-19 10:33:34.734168 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-06-19 10:33:34.734173 | orchestrator | Thursday 19 June 2025 10:30:50 +0000 (0:00:00.125) 0:08:29.769 ********* 2025-06-19 10:33:34.734178 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.734182 | orchestrator | 2025-06-19 10:33:34.734186 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-06-19 10:33:34.734191 | orchestrator | Thursday 19 June 2025 10:30:50 +0000 (0:00:00.206) 0:08:29.976 ********* 2025-06-19 10:33:34.734195 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.734200 | orchestrator | 2025-06-19 10:33:34.734204 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-06-19 10:33:34.734209 | orchestrator | Thursday 19 June 2025 10:30:51 +0000 (0:00:00.717) 0:08:30.693 ********* 2025-06-19 10:33:34.734213 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-19 10:33:34.734218 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-19 10:33:34.734222 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-19 10:33:34.734230 | orchestrator | skipping: [testbed-node-3] 2025-06-19 
10:33:34.734235 | orchestrator | 2025-06-19 10:33:34.734239 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-06-19 10:33:34.734244 | orchestrator | Thursday 19 June 2025 10:30:51 +0000 (0:00:00.382) 0:08:31.076 ********* 2025-06-19 10:33:34.734248 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.734252 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.734257 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.734261 | orchestrator | 2025-06-19 10:33:34.734266 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-06-19 10:33:34.734270 | orchestrator | Thursday 19 June 2025 10:30:52 +0000 (0:00:00.337) 0:08:31.413 ********* 2025-06-19 10:33:34.734275 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.734279 | orchestrator | 2025-06-19 10:33:34.734284 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-06-19 10:33:34.734288 | orchestrator | Thursday 19 June 2025 10:30:52 +0000 (0:00:00.210) 0:08:31.623 ********* 2025-06-19 10:33:34.734293 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.734297 | orchestrator | 2025-06-19 10:33:34.734301 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-06-19 10:33:34.734306 | orchestrator | 2025-06-19 10:33:34.734310 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-19 10:33:34.734315 | orchestrator | Thursday 19 June 2025 10:30:53 +0000 (0:00:00.919) 0:08:32.542 ********* 2025-06-19 10:33:34.734319 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:33:34.734324 | orchestrator | 2025-06-19 10:33:34.734329 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2025-06-19 10:33:34.734333 | orchestrator | Thursday 19 June 2025 10:30:54 +0000 (0:00:01.156) 0:08:33.699 ********* 2025-06-19 10:33:34.734341 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:33:34.734346 | orchestrator | 2025-06-19 10:33:34.734350 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-19 10:33:34.734355 | orchestrator | Thursday 19 June 2025 10:30:55 +0000 (0:00:01.001) 0:08:34.700 ********* 2025-06-19 10:33:34.734359 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.734364 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.734368 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.734372 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:33:34.734377 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:33:34.734381 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:33:34.734386 | orchestrator | 2025-06-19 10:33:34.734390 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-19 10:33:34.734395 | orchestrator | Thursday 19 June 2025 10:30:56 +0000 (0:00:01.279) 0:08:35.980 ********* 2025-06-19 10:33:34.734399 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.734404 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.734408 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:33:34.734413 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.734417 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:33:34.734422 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:33:34.734426 | orchestrator | 2025-06-19 10:33:34.734431 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-19 10:33:34.734464 | orchestrator | Thursday 19 
June 2025 10:30:57 +0000 (0:00:00.725) 0:08:36.706 ********* 2025-06-19 10:33:34.734470 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:33:34.734475 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.734479 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.734484 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:33:34.734488 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.734496 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:33:34.734501 | orchestrator | 2025-06-19 10:33:34.734505 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-19 10:33:34.734510 | orchestrator | Thursday 19 June 2025 10:30:58 +0000 (0:00:00.969) 0:08:37.676 ********* 2025-06-19 10:33:34.734514 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.734519 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.734523 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:33:34.734528 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.734532 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:33:34.734537 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:33:34.734541 | orchestrator | 2025-06-19 10:33:34.734546 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-19 10:33:34.734550 | orchestrator | Thursday 19 June 2025 10:30:59 +0000 (0:00:00.655) 0:08:38.332 ********* 2025-06-19 10:33:34.734555 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.734559 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.734564 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.734568 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:33:34.734573 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:33:34.734577 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:33:34.734582 | orchestrator | 2025-06-19 10:33:34.734589 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] 
************************* 2025-06-19 10:33:34.734593 | orchestrator | Thursday 19 June 2025 10:31:00 +0000 (0:00:01.233) 0:08:39.565 ********* 2025-06-19 10:33:34.734598 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.734602 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.734607 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.734611 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.734616 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.734620 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.734624 | orchestrator | 2025-06-19 10:33:34.734628 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-19 10:33:34.734632 | orchestrator | Thursday 19 June 2025 10:31:00 +0000 (0:00:00.591) 0:08:40.157 ********* 2025-06-19 10:33:34.734636 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.734640 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.734644 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.734648 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.734652 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.734656 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.734660 | orchestrator | 2025-06-19 10:33:34.734664 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-19 10:33:34.734668 | orchestrator | Thursday 19 June 2025 10:31:01 +0000 (0:00:00.834) 0:08:40.992 ********* 2025-06-19 10:33:34.734672 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:33:34.734676 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:33:34.734680 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:33:34.734684 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:33:34.734688 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:33:34.734692 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:33:34.734696 | orchestrator 
| 2025-06-19 10:33:34.734701 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-19 10:33:34.734705 | orchestrator | Thursday 19 June 2025 10:31:02 +0000 (0:00:00.967) 0:08:41.960 ********* 2025-06-19 10:33:34.734709 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:33:34.734713 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:33:34.734717 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:33:34.734721 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:33:34.734725 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:33:34.734729 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:33:34.734733 | orchestrator | 2025-06-19 10:33:34.734737 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-19 10:33:34.734741 | orchestrator | Thursday 19 June 2025 10:31:03 +0000 (0:00:01.279) 0:08:43.239 ********* 2025-06-19 10:33:34.734748 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.734752 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.734756 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.734760 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.734764 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.734768 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.734772 | orchestrator | 2025-06-19 10:33:34.734776 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-19 10:33:34.734780 | orchestrator | Thursday 19 June 2025 10:31:04 +0000 (0:00:00.654) 0:08:43.894 ********* 2025-06-19 10:33:34.734784 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.734788 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.734794 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.734799 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:33:34.734803 | orchestrator | ok: [testbed-node-1] 2025-06-19 
10:33:34.734807 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:33:34.734811 | orchestrator | 2025-06-19 10:33:34.734815 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-19 10:33:34.734819 | orchestrator | Thursday 19 June 2025 10:31:05 +0000 (0:00:01.193) 0:08:45.087 ********* 2025-06-19 10:33:34.734823 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:33:34.734827 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:33:34.734831 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:33:34.734835 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.734839 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.734843 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.734847 | orchestrator | 2025-06-19 10:33:34.734851 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-19 10:33:34.734855 | orchestrator | Thursday 19 June 2025 10:31:06 +0000 (0:00:00.660) 0:08:45.747 ********* 2025-06-19 10:33:34.734859 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:33:34.734863 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:33:34.734867 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:33:34.734872 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:33:34.734876 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:33:34.734880 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:33:34.734884 | orchestrator | 2025-06-19 10:33:34.734888 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-19 10:33:34.734892 | orchestrator | Thursday 19 June 2025 10:31:07 +0000 (0:00:00.843) 0:08:46.590 ********* 2025-06-19 10:33:34.734896 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:33:34.734900 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:33:34.734904 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:33:34.734908 | orchestrator | skipping: [testbed-node-0] 
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Thursday 19 June 2025 10:31:07 +0000 (0:00:00.643) 0:08:47.234 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Thursday 19 June 2025 10:31:08 +0000 (0:00:00.868) 0:08:48.103 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Thursday 19 June 2025 10:31:09 +0000 (0:00:00.592) 0:08:48.696 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Thursday 19 June 2025 10:31:10 +0000 (0:00:00.926) 0:08:49.622 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Thursday 19 June 2025 10:31:11 +0000 (0:00:00.690) 0:08:50.313 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-crash : Create client.crash keyring] ********************************
Thursday 19 June 2025 10:31:12 +0000 (0:00:01.259) 0:08:51.572 *********
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-crash : Get keys from monitors] *************************************
Thursday 19 June 2025 10:31:16 +0000 (0:00:03.982) 0:08:55.555 *********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
Thursday 19 June 2025 10:31:18 +0000 (0:00:01.994) 0:08:57.549 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
ok: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
Thursday 19 June 2025 10:31:20 +0000 (0:00:01.726) 0:08:59.276 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-crash : Include_tasks systemd.yml] **********************************
Thursday 19 June 2025 10:31:20 +0000 (0:00:00.938) 0:09:00.214 *********
included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
Thursday 19 June 2025 10:31:22 +0000 (0:00:01.210) 0:09:01.425 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-crash : Start the ceph-crash service] *******************************
Thursday 19 June 2025 10:31:24 +0000 (0:00:02.004) 0:09:03.429 *********
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-1]
changed: [testbed-node-5]
changed: [testbed-node-2]
changed: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
Thursday 19 June 2025 10:31:27 +0000 (0:00:03.644) 0:09:07.073 *********
included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
Thursday 19 June 2025 10:31:29 +0000 (0:00:01.289) 0:09:08.363 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
Thursday 19 June 2025 10:31:29 +0000 (0:00:00.784) 0:09:09.148 *********
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
Thursday 19 June 2025 10:31:31 +0000 (0:00:02.091) 0:09:11.240 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role ceph-mds] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Thursday 19 June 2025 10:31:32 +0000 (0:00:01.015) 0:09:12.256 *********
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Include check_running_containers.yml] *********************
Thursday 19 June 2025 10:31:33 +0000 (0:00:00.731) 0:09:12.988 *********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Check for a mon container] ********************************
Thursday 19 June 2025 10:31:34 +0000 (0:00:00.531) 0:09:13.519 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for an osd container] *******************************
Thursday 19 June 2025 10:31:34 +0000 (0:00:00.344) 0:09:13.864 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Thursday 19 June 2025 10:31:35 +0000 (0:00:00.982) 0:09:14.847 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Thursday 19 June 2025 10:31:36 +0000 (0:00:00.749) 0:09:15.596 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mgr container] ********************************
Thursday 19 June 2025 10:31:36 +0000 (0:00:00.670) 0:09:16.267 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Thursday 19 June 2025 10:31:37 +0000 (0:00:00.280) 0:09:16.548 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a nfs container] ********************************
Thursday 19 June 2025 10:31:37 +0000 (0:00:00.497) 0:09:17.045 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Thursday 19 June 2025 10:31:38 +0000 (0:00:00.329) 0:09:17.375 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Thursday 19 June 2025 10:31:38 +0000 (0:00:00.709) 0:09:18.084 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Thursday 19 June 2025 10:31:39 +0000 (0:00:00.746) 0:09:18.831 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Thursday 19 June 2025 10:31:40 +0000 (0:00:00.528) 0:09:19.359 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Thursday 19 June 2025 10:31:40 +0000 (0:00:00.302) 0:09:19.661 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Thursday 19 June 2025 10:31:40 +0000 (0:00:00.321) 0:09:19.983 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Thursday 19 June 2025 10:31:41 +0000 (0:00:00.361) 0:09:20.344 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Thursday 19 June 2025 10:31:41 +0000 (0:00:00.581) 0:09:20.926 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Thursday 19 June 2025 10:31:41 +0000 (0:00:00.301) 0:09:21.228 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Thursday 19 June 2025 10:31:42 +0000 (0:00:00.292) 0:09:21.520 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Thursday 19 June 2025 10:31:42 +0000 (0:00:00.289) 0:09:21.809 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Thursday 19 June 2025 10:31:43 +0000 (0:00:00.568) 0:09:22.377 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
Thursday 19 June 2025 10:31:43 +0000 (0:00:00.560) 0:09:22.938 *********
skipping: [testbed-node-4]
skipping: [testbed-node-5]
included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3

TASK [ceph-facts : Get current default crush rule details] *********************
Thursday 19 June 2025 10:31:44 +0000 (0:00:00.381) 0:09:23.319 *********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-facts : Get current default crush rule name] ************************
Thursday 19 June 2025 10:31:46 +0000 (0:00:02.356) 0:09:25.676 *********
skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
skipping: [testbed-node-3]

TASK [ceph-mds : Create filesystem pools] **************************************
Thursday 19 June 2025 10:31:46 +0000 (0:00:00.467) 0:09:26.144 *********
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})

TASK [ceph-mds : Create ceph filesystem] ***************************************
Thursday 19 June 2025 10:31:55 +0000 (0:00:08.933) 0:09:35.077 *********
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-mds : Include common.yml] *******************************************
Thursday 19 June 2025 10:31:59 +0000 (0:00:03.653) 0:09:38.731 *********
included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
Thursday 19 June 2025 10:31:59 +0000 (0:00:00.506) 0:09:39.237 *********
ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)

TASK [ceph-mds : Get keys from monitors] ***************************************
Thursday 19 June 2025 10:32:01 +0000 (0:00:01.277) 0:09:40.514 *********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
skipping: [testbed-node-3] => (item=None)
ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]

TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
Thursday 19 June 2025 10:32:03 +0000 (0:00:02.126) 0:09:42.641 *********
changed: [testbed-node-3] => (item=None)
skipping: [testbed-node-3] => (item=None)
changed: [testbed-node-3]
changed: [testbed-node-4] => (item=None)
skipping: [testbed-node-4] => (item=None)
changed: [testbed-node-4]
changed: [testbed-node-5] => (item=None)
skipping: [testbed-node-5] => (item=None)
changed: [testbed-node-5]

TASK [ceph-mds : Create mds keyring] *******************************************
Thursday 19 June 2025 10:32:04 +0000 (0:00:01.212) 0:09:43.853 *********
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]

TASK [ceph-mds : Non_containerized.yml] ****************************************
Thursday 19 June 2025 10:32:07 +0000 (0:00:02.712) 0:09:46.566 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-mds : Containerized.yml] ********************************************
Thursday 19 June 2025 10:32:07 +0000 (0:00:00.304) 0:09:46.870 *********
included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-mds : Include_tasks systemd.yml] ************************************
Thursday 19 June 2025 10:32:08 +0000 (0:00:00.752) 0:09:47.622 *********
included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-mds : Generate systemd unit file] ***********************************
Thursday 19 June 2025 10:32:08 +0000 (0:00:00.533) 0:09:48.156 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
Thursday 19 June 2025 10:32:10 +0000 (0:00:01.663) 0:09:49.820 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Enable ceph-mds.target] ***************************************
Thursday 19 June 2025 10:32:11 +0000 (0:00:01.242) 0:09:51.062 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Systemd start mds container] **********************************
Thursday 19 June 2025 10:32:13 +0000 (0:00:01.955) 0:09:53.018 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Wait for mds socket to exist] *********************************
Thursday 19 June 2025 10:32:15 +0000 (0:00:01.958) 0:09:54.977 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Thursday 19 June 2025 10:32:17 +0000 (0:00:01.569) 0:09:56.546 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
Thursday 19 June 2025 10:32:18 +0000 (0:00:00.741) 0:09:57.287 *********
included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
Thursday 19 June 2025 10:32:18 +0000 (0:00:00.833) 0:09:58.121 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
Thursday 19 June 2025 10:32:19 +0000 (0:00:00.383) 0:09:58.504 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
Thursday 19 June 2025 10:32:20 +0000 (0:00:01.213) 0:09:59.718 *********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
Thursday 19 June 2025 10:32:21 +0000 (0:00:00.886) 0:10:00.604 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

PLAY [Apply role ceph-rgw] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Thursday 19 June 2025 10:32:22 +0000 (0:00:00.861) 0:10:01.466 *********
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Include check_running_containers.yml] *********************
Thursday 19 June 2025 10:32:22 +0000 (0:00:00.529) 0:10:01.996 *********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Check for a mon container] ********************************
Thursday 19 June 2025 10:32:23 +0000 (0:00:00.764) 0:10:02.760 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for an osd container] *******************************
Thursday 19 June 2025 10:32:23 +0000 (0:00:00.334) 0:10:03.095 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Thursday 19 June 2025 10:32:24 +0000 (0:00:00.734) 0:10:03.829 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Thursday 19 June 2025 10:32:25 +0000 (0:00:01.035) 0:10:04.864 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mgr container] ********************************
Thursday 19 June 2025 10:32:26 +0000 (0:00:00.691) 0:10:05.556 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Thursday 19 June 2025 10:32:26 +0000 (0:00:00.333) 0:10:05.890 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a nfs container] ********************************
Thursday 19 June 2025 10:32:26 +0000 (0:00:00.297) 0:10:06.188 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Thursday 19 June 2025 10:32:27 +0000 (0:00:00.637) 0:10:06.826 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Thursday 19 June 2025 10:32:28 +0000 (0:00:00.730) 0:10:07.557 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Thursday 19 June 2025 10:32:29 +0000 (0:00:00.814) 0:10:08.371 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Thursday 19 June 2025 10:32:29 +0000 (0:00:00.325) 0:10:08.697 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Thursday 19 June 2025 10:32:30 +0000 (0:00:00.646) 0:10:09.343 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Thursday 19 June 2025 10:32:30 +0000 (0:00:00.326) 0:10:09.669 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Thursday 19 June 2025 10:32:30 +0000 (0:00:00.339) 0:10:10.009 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
10:33:34.737010 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-19 10:33:34.737014 | orchestrator | Thursday 19 June 2025 10:32:31 +0000 (0:00:00.376) 0:10:10.386 ********* 2025-06-19 10:33:34.737021 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.737025 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.737029 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.737033 | orchestrator | 2025-06-19 10:33:34.737038 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-19 10:33:34.737042 | orchestrator | Thursday 19 June 2025 10:32:31 +0000 (0:00:00.595) 0:10:10.982 ********* 2025-06-19 10:33:34.737046 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.737050 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.737054 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.737058 | orchestrator | 2025-06-19 10:33:34.737062 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-19 10:33:34.737066 | orchestrator | Thursday 19 June 2025 10:32:32 +0000 (0:00:00.439) 0:10:11.421 ********* 2025-06-19 10:33:34.737070 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.737075 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.737079 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.737083 | orchestrator | 2025-06-19 10:33:34.737087 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-19 10:33:34.737091 | orchestrator | Thursday 19 June 2025 10:32:32 +0000 (0:00:00.437) 0:10:11.858 ********* 2025-06-19 10:33:34.737095 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:33:34.737099 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:33:34.737103 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:33:34.737107 | orchestrator | 2025-06-19 10:33:34.737111 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-19 10:33:34.737115 | orchestrator | Thursday 19 June 2025 10:32:32 +0000 (0:00:00.349) 0:10:12.208 ********* 2025-06-19 10:33:34.737120 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:33:34.737124 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:33:34.737128 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:33:34.737132 | orchestrator | 2025-06-19 10:33:34.737136 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-06-19 10:33:34.737143 | orchestrator | Thursday 19 June 2025 10:32:33 +0000 (0:00:00.797) 0:10:13.006 ********* 2025-06-19 10:33:34.737147 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-19 10:33:34.737151 | orchestrator | 2025-06-19 10:33:34.737155 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-06-19 10:33:34.737159 | orchestrator | Thursday 19 June 2025 10:32:34 +0000 (0:00:00.539) 0:10:13.546 ********* 2025-06-19 10:33:34.737163 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-19 10:33:34.737167 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-19 10:33:34.737172 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-19 10:33:34.737176 | orchestrator | 2025-06-19 10:33:34.737180 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-06-19 10:33:34.737184 | orchestrator | Thursday 19 June 2025 10:32:36 +0000 (0:00:02.519) 0:10:16.065 ********* 2025-06-19 10:33:34.737188 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-19 10:33:34.737192 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-19 10:33:34.737196 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:33:34.737200 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2025-06-19 10:33:34.737204 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-06-19 10:33:34.737208 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:33:34.737212 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-19 10:33:34.737216 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-06-19 10:33:34.737220 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:33:34.737225 | orchestrator | 2025-06-19 10:33:34.737229 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-06-19 10:33:34.737233 | orchestrator | Thursday 19 June 2025 10:32:38 +0000 (0:00:01.706) 0:10:17.772 ********* 2025-06-19 10:33:34.737240 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.737244 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.737248 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.737252 | orchestrator | 2025-06-19 10:33:34.737256 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-06-19 10:33:34.737260 | orchestrator | Thursday 19 June 2025 10:32:38 +0000 (0:00:00.307) 0:10:18.080 ********* 2025-06-19 10:33:34.737264 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-19 10:33:34.737268 | orchestrator | 2025-06-19 10:33:34.737272 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-06-19 10:33:34.737277 | orchestrator | Thursday 19 June 2025 10:32:39 +0000 (0:00:00.598) 0:10:18.678 ********* 2025-06-19 10:33:34.737281 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-19 10:33:34.737287 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-19 10:33:34.737292 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-19 10:33:34.737296 | orchestrator | 2025-06-19 10:33:34.737300 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-06-19 10:33:34.737304 | orchestrator | Thursday 19 June 2025 10:32:40 +0000 (0:00:01.308) 0:10:19.986 ********* 2025-06-19 10:33:34.737308 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-19 10:33:34.737312 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-06-19 10:33:34.737316 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-19 10:33:34.737320 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-06-19 10:33:34.737325 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-19 10:33:34.737329 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-06-19 10:33:34.737333 | orchestrator | 2025-06-19 10:33:34.737337 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-06-19 10:33:34.737341 | orchestrator | Thursday 19 June 2025 10:32:45 +0000 (0:00:04.765) 0:10:24.752 ********* 2025-06-19 10:33:34.737345 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-19 10:33:34.737349 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-19 10:33:34.737353 | orchestrator | 
ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-19 10:33:34.737357 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-19 10:33:34.737361 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-19 10:33:34.737366 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-19 10:33:34.737370 | orchestrator | 2025-06-19 10:33:34.737374 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-06-19 10:33:34.737378 | orchestrator | Thursday 19 June 2025 10:32:47 +0000 (0:00:02.392) 0:10:27.145 ********* 2025-06-19 10:33:34.737382 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-19 10:33:34.737386 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:33:34.737390 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-19 10:33:34.737397 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:33:34.737401 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-19 10:33:34.737408 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:33:34.737412 | orchestrator | 2025-06-19 10:33:34.737416 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-06-19 10:33:34.737420 | orchestrator | Thursday 19 June 2025 10:32:49 +0000 (0:00:01.244) 0:10:28.389 ********* 2025-06-19 10:33:34.737424 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-06-19 10:33:34.737428 | orchestrator | 2025-06-19 10:33:34.737433 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-06-19 10:33:34.737520 | orchestrator | Thursday 19 June 2025 10:32:49 +0000 (0:00:00.477) 0:10:28.867 ********* 2025-06-19 10:33:34.737526 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2025-06-19 10:33:34.737530 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-19 10:33:34.737535 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-19 10:33:34.737539 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-19 10:33:34.737543 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-19 10:33:34.737547 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.737551 | orchestrator | 2025-06-19 10:33:34.737555 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-06-19 10:33:34.737559 | orchestrator | Thursday 19 June 2025 10:32:50 +0000 (0:00:00.594) 0:10:29.462 ********* 2025-06-19 10:33:34.737564 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-19 10:33:34.737568 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-19 10:33:34.737572 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-19 10:33:34.737576 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-19 10:33:34.737580 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-19 10:33:34.737589 | orchestrator | skipping: [testbed-node-3] 2025-06-19 
10:33:34.737593 | orchestrator | 2025-06-19 10:33:34.737597 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-06-19 10:33:34.737602 | orchestrator | Thursday 19 June 2025 10:32:50 +0000 (0:00:00.604) 0:10:30.066 ********* 2025-06-19 10:33:34.737606 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-19 10:33:34.737610 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-19 10:33:34.737614 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-19 10:33:34.737619 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-19 10:33:34.737623 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-19 10:33:34.737627 | orchestrator | 2025-06-19 10:33:34.737631 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-06-19 10:33:34.737640 | orchestrator | Thursday 19 June 2025 10:33:22 +0000 (0:00:31.368) 0:11:01.435 ********* 2025-06-19 10:33:34.737645 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.737649 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.737653 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.737657 | orchestrator | 2025-06-19 10:33:34.737661 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-06-19 10:33:34.737665 | orchestrator | 
Thursday 19 June 2025 10:33:22 +0000 (0:00:00.327) 0:11:01.762 ********* 2025-06-19 10:33:34.737669 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.737673 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.737677 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.737681 | orchestrator | 2025-06-19 10:33:34.737686 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-06-19 10:33:34.737690 | orchestrator | Thursday 19 June 2025 10:33:22 +0000 (0:00:00.319) 0:11:02.082 ********* 2025-06-19 10:33:34.737694 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-19 10:33:34.737698 | orchestrator | 2025-06-19 10:33:34.737702 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-06-19 10:33:34.737710 | orchestrator | Thursday 19 June 2025 10:33:23 +0000 (0:00:00.774) 0:11:02.857 ********* 2025-06-19 10:33:34.737714 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-19 10:33:34.737718 | orchestrator | 2025-06-19 10:33:34.737722 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-06-19 10:33:34.737726 | orchestrator | Thursday 19 June 2025 10:33:24 +0000 (0:00:00.520) 0:11:03.377 ********* 2025-06-19 10:33:34.737730 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:33:34.737734 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:33:34.737739 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:33:34.737743 | orchestrator | 2025-06-19 10:33:34.737747 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-06-19 10:33:34.737751 | orchestrator | Thursday 19 June 2025 10:33:25 +0000 (0:00:01.716) 0:11:05.094 ********* 2025-06-19 10:33:34.737755 | orchestrator | changed: 
[testbed-node-4] 2025-06-19 10:33:34.737759 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:33:34.737763 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:33:34.737767 | orchestrator | 2025-06-19 10:33:34.737771 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-06-19 10:33:34.737775 | orchestrator | Thursday 19 June 2025 10:33:26 +0000 (0:00:01.104) 0:11:06.199 ********* 2025-06-19 10:33:34.737780 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:33:34.737784 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:33:34.737788 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:33:34.737792 | orchestrator | 2025-06-19 10:33:34.737796 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-06-19 10:33:34.737800 | orchestrator | Thursday 19 June 2025 10:33:28 +0000 (0:00:01.857) 0:11:08.056 ********* 2025-06-19 10:33:34.737804 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-19 10:33:34.737807 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-19 10:33:34.737811 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-19 10:33:34.737815 | orchestrator | 2025-06-19 10:33:34.737819 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-19 10:33:34.737822 | orchestrator | Thursday 19 June 2025 10:33:31 +0000 (0:00:02.692) 0:11:10.748 ********* 2025-06-19 10:33:34.737826 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.737830 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.737834 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.737842 | orchestrator 
| 2025-06-19 10:33:34.737846 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-06-19 10:33:34.737850 | orchestrator | Thursday 19 June 2025 10:33:31 +0000 (0:00:00.329) 0:11:11.078 ********* 2025-06-19 10:33:34.737856 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-19 10:33:34.737859 | orchestrator | 2025-06-19 10:33:34.737863 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-06-19 10:33:34.737867 | orchestrator | Thursday 19 June 2025 10:33:32 +0000 (0:00:00.643) 0:11:11.722 ********* 2025-06-19 10:33:34.737871 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:33:34.737874 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:33:34.737878 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:33:34.737882 | orchestrator | 2025-06-19 10:33:34.737886 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-06-19 10:33:34.737889 | orchestrator | Thursday 19 June 2025 10:33:32 +0000 (0:00:00.272) 0:11:11.994 ********* 2025-06-19 10:33:34.737893 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:33:34.737897 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:33:34.737901 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:33:34.737904 | orchestrator | 2025-06-19 10:33:34.737908 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-06-19 10:33:34.737912 | orchestrator | Thursday 19 June 2025 10:33:33 +0000 (0:00:00.338) 0:11:12.333 ********* 2025-06-19 10:33:34.737916 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-19 10:33:34.737919 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-19 10:33:34.737923 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-19 10:33:34.737927 | orchestrator 
| skipping: [testbed-node-3] 2025-06-19 10:33:34.737931 | orchestrator | 2025-06-19 10:33:34.737934 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-06-19 10:33:34.737938 | orchestrator | Thursday 19 June 2025 10:33:33 +0000 (0:00:00.691) 0:11:13.024 ********* 2025-06-19 10:33:34.737942 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:33:34.737946 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:33:34.737949 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:33:34.737953 | orchestrator | 2025-06-19 10:33:34.737957 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-19 10:33:34.737961 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2025-06-19 10:33:34.737965 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-06-19 10:33:34.737968 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-06-19 10:33:34.737972 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2025-06-19 10:33:34.737979 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-06-19 10:33:34.737983 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-06-19 10:33:34.737987 | orchestrator | 2025-06-19 10:33:34.737990 | orchestrator | 2025-06-19 10:33:34.737994 | orchestrator | 2025-06-19 10:33:34.737998 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-19 10:33:34.738002 | orchestrator | Thursday 19 June 2025 10:33:33 +0000 (0:00:00.166) 0:11:13.190 ********* 2025-06-19 10:33:34.738006 | orchestrator | =============================================================================== 
2025-06-19 10:33:34.738009 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------ 104.72s 2025-06-19 10:33:34.738035 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 40.96s 2025-06-19 10:33:34.738040 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.37s 2025-06-19 10:33:34.738043 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 24.13s 2025-06-19 10:33:34.738047 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.57s 2025-06-19 10:33:34.738051 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.48s 2025-06-19 10:33:34.738055 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.68s 2025-06-19 10:33:34.738058 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.16s 2025-06-19 10:33:34.738062 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.93s 2025-06-19 10:33:34.738066 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.94s 2025-06-19 10:33:34.738070 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.39s 2025-06-19 10:33:34.738073 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.03s 2025-06-19 10:33:34.738077 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.77s 2025-06-19 10:33:34.738081 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 4.12s 2025-06-19 10:33:34.738084 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.98s 2025-06-19 10:33:34.738088 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.93s 2025-06-19 
10:33:34.738092 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.65s
2025-06-19 10:33:34.738096 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.64s
2025-06-19 10:33:34.738099 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.19s
2025-06-19 10:33:34.738103 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 3.07s
2025-06-19 10:33:37.751389 | orchestrator | 2025-06-19 10:33:37 | INFO  | Task 2a4dbdac-2535-4c11-9cf7-f0ad4b37d9e3 is in state STARTED
2025-06-19 10:33:37.752673 | orchestrator | 2025-06-19 10:33:37 | INFO  | Task 1603ab84-5989-4715-9023-135c2350bb80 is in state STARTED
2025-06-19 10:33:37.754880 | orchestrator | 2025-06-19 10:33:37 | INFO  | Task 0b95557d-06ff-4b5f-bfb8-a28961fe0728 is in state STARTED
2025-06-19 10:33:37.755623 | orchestrator | 2025-06-19 10:33:37 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:34:32.635191 | orchestrator | 2025-06-19 10:34:32 | INFO  | Task 2a4dbdac-2535-4c11-9cf7-f0ad4b37d9e3 is in state
STARTED 2025-06-19 10:34:32.637104 | orchestrator | 2025-06-19 10:34:32 | INFO  | Task 1603ab84-5989-4715-9023-135c2350bb80 is in state STARTED 2025-06-19 10:34:32.638862 | orchestrator | 2025-06-19 10:34:32 | INFO  | Task 0b95557d-06ff-4b5f-bfb8-a28961fe0728 is in state STARTED 2025-06-19 10:34:32.638910 | orchestrator | 2025-06-19 10:34:32 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:34:35.681537 | orchestrator | 2025-06-19 10:34:35 | INFO  | Task 2a4dbdac-2535-4c11-9cf7-f0ad4b37d9e3 is in state STARTED 2025-06-19 10:34:35.682868 | orchestrator | 2025-06-19 10:34:35 | INFO  | Task 1603ab84-5989-4715-9023-135c2350bb80 is in state STARTED 2025-06-19 10:34:35.684712 | orchestrator | 2025-06-19 10:34:35 | INFO  | Task 0b95557d-06ff-4b5f-bfb8-a28961fe0728 is in state STARTED 2025-06-19 10:34:35.684739 | orchestrator | 2025-06-19 10:34:35 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:34:38.729921 | orchestrator | 2025-06-19 10:34:38 | INFO  | Task 2a4dbdac-2535-4c11-9cf7-f0ad4b37d9e3 is in state STARTED 2025-06-19 10:34:38.732179 | orchestrator | 2025-06-19 10:34:38 | INFO  | Task 1603ab84-5989-4715-9023-135c2350bb80 is in state STARTED 2025-06-19 10:34:38.734432 | orchestrator | 2025-06-19 10:34:38 | INFO  | Task 0b95557d-06ff-4b5f-bfb8-a28961fe0728 is in state STARTED 2025-06-19 10:34:38.734468 | orchestrator | 2025-06-19 10:34:38 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:34:41.778629 | orchestrator | 2025-06-19 10:34:41 | INFO  | Task 2a4dbdac-2535-4c11-9cf7-f0ad4b37d9e3 is in state STARTED 2025-06-19 10:34:41.780980 | orchestrator | 2025-06-19 10:34:41 | INFO  | Task 1603ab84-5989-4715-9023-135c2350bb80 is in state STARTED 2025-06-19 10:34:41.783532 | orchestrator | 2025-06-19 10:34:41 | INFO  | Task 0b95557d-06ff-4b5f-bfb8-a28961fe0728 is in state STARTED 2025-06-19 10:34:41.784012 | orchestrator | 2025-06-19 10:34:41 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:34:44.825643 | orchestrator | 
2025-06-19 10:34:44 | INFO  | Task 2a4dbdac-2535-4c11-9cf7-f0ad4b37d9e3 is in state STARTED 2025-06-19 10:34:44.827555 | orchestrator | 2025-06-19 10:34:44 | INFO  | Task 1603ab84-5989-4715-9023-135c2350bb80 is in state STARTED 2025-06-19 10:34:44.829896 | orchestrator | 2025-06-19 10:34:44 | INFO  | Task 0b95557d-06ff-4b5f-bfb8-a28961fe0728 is in state STARTED 2025-06-19 10:34:44.830163 | orchestrator | 2025-06-19 10:34:44 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:34:47.872628 | orchestrator | 2025-06-19 10:34:47 | INFO  | Task 2a4dbdac-2535-4c11-9cf7-f0ad4b37d9e3 is in state STARTED 2025-06-19 10:34:47.874655 | orchestrator | 2025-06-19 10:34:47 | INFO  | Task 1603ab84-5989-4715-9023-135c2350bb80 is in state STARTED 2025-06-19 10:34:47.876305 | orchestrator | 2025-06-19 10:34:47 | INFO  | Task 0b95557d-06ff-4b5f-bfb8-a28961fe0728 is in state STARTED 2025-06-19 10:34:47.876343 | orchestrator | 2025-06-19 10:34:47 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:34:50.920117 | orchestrator | 2025-06-19 10:34:50 | INFO  | Task 2a4dbdac-2535-4c11-9cf7-f0ad4b37d9e3 is in state STARTED 2025-06-19 10:34:50.922910 | orchestrator | 2025-06-19 10:34:50 | INFO  | Task 1603ab84-5989-4715-9023-135c2350bb80 is in state STARTED 2025-06-19 10:34:50.924594 | orchestrator | 2025-06-19 10:34:50 | INFO  | Task 0b95557d-06ff-4b5f-bfb8-a28961fe0728 is in state STARTED 2025-06-19 10:34:50.924628 | orchestrator | 2025-06-19 10:34:50 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:34:53.970330 | orchestrator | 2025-06-19 10:34:53 | INFO  | Task 2a4dbdac-2535-4c11-9cf7-f0ad4b37d9e3 is in state STARTED 2025-06-19 10:34:53.971975 | orchestrator | 2025-06-19 10:34:53 | INFO  | Task 1603ab84-5989-4715-9023-135c2350bb80 is in state STARTED 2025-06-19 10:34:53.973714 | orchestrator | 2025-06-19 10:34:53 | INFO  | Task 0b95557d-06ff-4b5f-bfb8-a28961fe0728 is in state STARTED 2025-06-19 10:34:53.973738 | orchestrator | 2025-06-19 10:34:53 | INFO  | 
Wait 1 second(s) until the next check 2025-06-19 10:34:57.012773 | orchestrator | 2025-06-19 10:34:57 | INFO  | Task 2a4dbdac-2535-4c11-9cf7-f0ad4b37d9e3 is in state STARTED 2025-06-19 10:34:57.015248 | orchestrator | 2025-06-19 10:34:57 | INFO  | Task 1603ab84-5989-4715-9023-135c2350bb80 is in state STARTED 2025-06-19 10:34:57.017506 | orchestrator | 2025-06-19 10:34:57 | INFO  | Task 0b95557d-06ff-4b5f-bfb8-a28961fe0728 is in state STARTED 2025-06-19 10:34:57.017962 | orchestrator | 2025-06-19 10:34:57 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:35:00.064401 | orchestrator | 2025-06-19 10:35:00 | INFO  | Task 2a4dbdac-2535-4c11-9cf7-f0ad4b37d9e3 is in state STARTED 2025-06-19 10:35:00.066313 | orchestrator | 2025-06-19 10:35:00 | INFO  | Task 1603ab84-5989-4715-9023-135c2350bb80 is in state SUCCESS 2025-06-19 10:35:00.067832 | orchestrator | 2025-06-19 10:35:00.067869 | orchestrator | 2025-06-19 10:35:00.067882 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-19 10:35:00.067894 | orchestrator | 2025-06-19 10:35:00.067906 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-19 10:35:00.067917 | orchestrator | Thursday 19 June 2025 10:32:05 +0000 (0:00:00.265) 0:00:00.265 ********* 2025-06-19 10:35:00.067929 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:35:00.067941 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:35:00.067952 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:35:00.067963 | orchestrator | 2025-06-19 10:35:00.067974 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-19 10:35:00.067984 | orchestrator | Thursday 19 June 2025 10:32:05 +0000 (0:00:00.290) 0:00:00.555 ********* 2025-06-19 10:35:00.067997 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-06-19 10:35:00.068009 | orchestrator | ok: [testbed-node-1] => 
(item=enable_opensearch_True) 2025-06-19 10:35:00.068019 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-06-19 10:35:00.068030 | orchestrator | 2025-06-19 10:35:00.068041 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-06-19 10:35:00.068052 | orchestrator | 2025-06-19 10:35:00.068063 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-19 10:35:00.068074 | orchestrator | Thursday 19 June 2025 10:32:06 +0000 (0:00:00.435) 0:00:00.990 ********* 2025-06-19 10:35:00.068085 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:35:00.068101 | orchestrator | 2025-06-19 10:35:00.068112 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-06-19 10:35:00.068123 | orchestrator | Thursday 19 June 2025 10:32:06 +0000 (0:00:00.484) 0:00:01.475 ********* 2025-06-19 10:35:00.068134 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-19 10:35:00.068145 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-19 10:35:00.068156 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-19 10:35:00.068166 | orchestrator | 2025-06-19 10:35:00.068177 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-06-19 10:35:00.068188 | orchestrator | Thursday 19 June 2025 10:32:07 +0000 (0:00:00.648) 0:00:02.123 ********* 2025-06-19 10:35:00.068263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-19 10:35:00.068306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-19 10:35:00.068333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-19 10:35:00.068348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-19 10:35:00.068368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-19 10:35:00.068391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-19 10:35:00.068403 | orchestrator | 2025-06-19 10:35:00.068415 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-19 10:35:00.068426 | orchestrator | Thursday 19 June 2025 10:32:09 +0000 (0:00:01.720) 0:00:03.843 ********* 2025-06-19 10:35:00.068439 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2025-06-19 10:35:00.068453 | orchestrator | 2025-06-19 10:35:00.068466 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-06-19 10:35:00.068479 | orchestrator | Thursday 19 June 2025 10:32:09 +0000 (0:00:00.548) 0:00:04.392 ********* 2025-06-19 10:35:00.068501 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-19 10:35:00.068516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': 
['option dontlog-normal']}}}}) 2025-06-19 10:35:00.068535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-19 10:35:00.068558 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-19 
10:35:00.068581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-19 10:35:00.068596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-19 10:35:00.068611 | orchestrator | 2025-06-19 10:35:00.068630 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-06-19 10:35:00.068644 | orchestrator | Thursday 19 June 2025 10:32:12 +0000 (0:00:02.615) 0:00:07.008 ********* 2025-06-19 10:35:00.068691 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:35:00.068748 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:35:00.068795 | orchestrator | skipping: [testbed-node-2] [... per-item skip output elided; the opensearch and opensearch-dashboards item definitions are identical to those printed above, and both items were skipped on all three nodes ...] 2025-06-19 10:35:00.068806 | orchestrator | 2025-06-19 10:35:00.068817 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-06-19 10:35:00.068828 | orchestrator | Thursday 19 June 2025 10:32:13 +0000 (0:00:01.097) 0:00:08.105 ********* 2025-06-19 10:35:00.068878 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:35:00.068919 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:35:00.068970 | orchestrator | skipping: [testbed-node-2] [... per-item skip output elided, as above ...] 2025-06-19 10:35:00.068981 | orchestrator | 2025-06-19 10:35:00.068992 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-06-19 10:35:00.069003 | orchestrator | Thursday 19 June 2025 10:32:14 +0000 (0:00:01.012) 0:00:09.118 ********* 2025-06-19 10:35:00.069019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled':
True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-19 10:35:00.069031 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-19 10:35:00.069043 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-19 10:35:00.069062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 
'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-19 10:35:00.069087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}}}}) 2025-06-19 10:35:00.069105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-19 10:35:00.069117 | orchestrator | 2025-06-19 10:35:00.069128 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-06-19 10:35:00.069139 | orchestrator | Thursday 19 June 2025 10:32:16 +0000 (0:00:02.497) 0:00:11.615 ********* 2025-06-19 10:35:00.069150 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:35:00.069161 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:35:00.069172 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:35:00.069183 | orchestrator | 2025-06-19 10:35:00.069194 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-06-19 10:35:00.069231 | orchestrator | Thursday 19 June 2025 10:32:20 +0000 (0:00:03.512) 0:00:15.128 ********* 2025-06-19 10:35:00.069243 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:35:00.069254 | orchestrator 
| changed: [testbed-node-1] 2025-06-19 10:35:00.069264 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:35:00.069275 | orchestrator | 2025-06-19 10:35:00.069286 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-06-19 10:35:00.069296 | orchestrator | Thursday 19 June 2025 10:32:22 +0000 (0:00:01.979) 0:00:17.108 ********* 2025-06-19 10:35:00.069308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-19 10:35:00.069335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-19 10:35:00.069348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-19 10:35:00.069364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-19 10:35:00.069377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-19 10:35:00.069396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 
'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-06-19 10:35:00.069416 | orchestrator |
2025-06-19 10:35:00.069427 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-06-19 10:35:00.069438 | orchestrator | Thursday 19 June 2025 10:32:24 +0000 (0:00:02.310) 0:00:19.418 *********
2025-06-19 10:35:00.069449 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:35:00.069459 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:35:00.069470 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:35:00.069481 | orchestrator |
2025-06-19 10:35:00.069491 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-06-19 10:35:00.069502 | orchestrator | Thursday 19 June 2025 10:32:25 +0000 (0:00:00.309) 0:00:19.727 *********
2025-06-19 10:35:00.069513 | orchestrator |
2025-06-19 10:35:00.069524 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-06-19 10:35:00.069534 | orchestrator | Thursday 19 June 2025 10:32:25 +0000 (0:00:00.085) 0:00:19.813 *********
2025-06-19 10:35:00.069545 | orchestrator |
2025-06-19 10:35:00.069556 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-06-19 10:35:00.069566 | orchestrator | Thursday 19 June 2025 10:32:25 +0000 (0:00:00.068) 0:00:19.882 *********
2025-06-19 10:35:00.069577 | orchestrator |
2025-06-19 10:35:00.069588 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************
2025-06-19 10:35:00.069598 | orchestrator | Thursday 19 June 2025 10:32:25 +0000 (0:00:00.085) 0:00:19.967 *********
2025-06-19 10:35:00.069609 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:35:00.069620 | orchestrator |
2025-06-19 10:35:00.069635 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] *********************************
2025-06-19 10:35:00.069646 | orchestrator | Thursday 19 June 2025 10:32:25 +0000 (0:00:00.613) 0:00:20.581 *********
2025-06-19 10:35:00.069657 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:35:00.069668 | orchestrator |
2025-06-19 10:35:00.069678 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ********************
2025-06-19 10:35:00.069689 | orchestrator | Thursday 19 June 2025 10:32:26 +0000 (0:00:00.216) 0:00:20.797 *********
2025-06-19 10:35:00.069700 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:35:00.069710 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:35:00.069731 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:35:00.069751 | orchestrator |
2025-06-19 10:35:00.069770 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] *********
2025-06-19 10:35:00.069790 | orchestrator | Thursday 19 June 2025 10:33:32 +0000 (0:01:05.878) 0:01:26.676 *********
2025-06-19 10:35:00.069809 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:35:00.069827 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:35:00.069845 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:35:00.070144 | orchestrator |
2025-06-19 10:35:00.070165 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-06-19 10:35:00.070177 | orchestrator | Thursday 19 June 2025 10:34:48 +0000 (0:01:16.033) 0:02:42.710 *********
2025-06-19 10:35:00.070188 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-19 10:35:00.070238 | orchestrator |
2025-06-19 10:35:00.070251 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2025-06-19 10:35:00.070261 | orchestrator | Thursday 19 June 2025 10:34:48 +0000 (0:00:00.660) 0:02:43.370 *********
2025-06-19 10:35:00.070273 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:35:00.070284 | orchestrator |
2025-06-19 10:35:00.070294 | orchestrator | TASK [opensearch : Check if a log retention policy exists] *********************
2025-06-19 10:35:00.070305 | orchestrator | Thursday 19 June 2025 10:34:51 +0000 (0:00:02.278) 0:02:45.648 *********
2025-06-19 10:35:00.070316 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:35:00.070327 | orchestrator |
2025-06-19 10:35:00.070337 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2025-06-19 10:35:00.070348 | orchestrator | Thursday 19 June 2025 10:34:53 +0000 (0:00:02.180) 0:02:47.829 *********
2025-06-19 10:35:00.070359 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:35:00.070369 | orchestrator |
2025-06-19 10:35:00.070380 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2025-06-19 10:35:00.070391 | orchestrator | Thursday 19 June 2025 10:34:55 +0000 (0:00:02.741) 0:02:50.570 *********
2025-06-19 10:35:00.070401 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:35:00.070412 | orchestrator |
2025-06-19 10:35:00.070423 | orchestrator | PLAY RECAP *********************************************************************
2025-06-19 10:35:00.070434 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-19 10:35:00.070447 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-06-19 10:35:00.070458 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-06-19 10:35:00.070469 | orchestrator |
2025-06-19 10:35:00.070480 | orchestrator |
2025-06-19 10:35:00.070491 | orchestrator | TASKS RECAP ********************************************************************
2025-06-19 10:35:00.070526 | orchestrator | Thursday 19 June 2025 10:34:58 +0000 (0:00:02.368) 0:02:52.939 *********
2025-06-19 10:35:00.070538 | orchestrator | ===============================================================================
2025-06-19 10:35:00.070548 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 76.03s
2025-06-19 10:35:00.070559 | orchestrator | opensearch : Restart opensearch container ------------------------------ 65.88s
2025-06-19 10:35:00.070570 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.51s
2025-06-19 10:35:00.070580 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.74s
2025-06-19 10:35:00.070591 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.62s
2025-06-19 10:35:00.070602 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.50s
2025-06-19 10:35:00.070612 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.37s
2025-06-19 10:35:00.070623 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.31s
2025-06-19 10:35:00.070633 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.28s
2025-06-19 10:35:00.070644 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.18s
2025-06-19 10:35:00.070655 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.98s
2025-06-19 10:35:00.070665 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.72s
2025-06-19 10:35:00.070676 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.10s
2025-06-19 10:35:00.070687 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.01s
2025-06-19 10:35:00.070697 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.66s
2025-06-19 10:35:00.070719 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.65s
2025-06-19 10:35:00.070730 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 0.61s
2025-06-19 10:35:00.070741 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.55s
2025-06-19 10:35:00.070758 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.48s
2025-06-19 10:35:00.070769 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.44s
2025-06-19 10:35:00.070781 | orchestrator | 2025-06-19 10:35:00 | INFO  | Task 0b95557d-06ff-4b5f-bfb8-a28961fe0728 is in state STARTED
2025-06-19 10:35:00.070794 | orchestrator | 2025-06-19 10:35:00 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:35:03.118548 | orchestrator | 2025-06-19 10:35:03 | INFO  | Task 2a4dbdac-2535-4c11-9cf7-f0ad4b37d9e3 is in state STARTED
2025-06-19 10:35:03.120491 | orchestrator | 2025-06-19 10:35:03 | INFO  | Task 0b95557d-06ff-4b5f-bfb8-a28961fe0728 is in state STARTED
2025-06-19 10:35:03.120522 | orchestrator | 2025-06-19 10:35:03 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:35:06.165878 | orchestrator | 2025-06-19 10:35:06 | INFO  | Task 2a4dbdac-2535-4c11-9cf7-f0ad4b37d9e3 is in state STARTED
2025-06-19 10:35:06.167275 | orchestrator | 2025-06-19 10:35:06 | INFO  | Task 0b95557d-06ff-4b5f-bfb8-a28961fe0728 is in state STARTED
2025-06-19 10:35:06.167317 | orchestrator | 2025-06-19 10:35:06 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:35:09.214168 | orchestrator | 2025-06-19 10:35:09 | INFO  | Task 2a4dbdac-2535-4c11-9cf7-f0ad4b37d9e3 is in state STARTED
2025-06-19 10:35:09.215620 | orchestrator | 2025-06-19 10:35:09 | INFO  | Task 0b95557d-06ff-4b5f-bfb8-a28961fe0728 is in state STARTED
2025-06-19 10:35:09.215654 | orchestrator | 2025-06-19 10:35:09 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:35:12.254542 | orchestrator | 2025-06-19 10:35:12 | INFO  | Task 2a4dbdac-2535-4c11-9cf7-f0ad4b37d9e3 is in state STARTED
2025-06-19 10:35:12.255555 | orchestrator | 2025-06-19 10:35:12 | INFO  | Task 0b95557d-06ff-4b5f-bfb8-a28961fe0728 is in state STARTED
2025-06-19 10:35:12.255585 | orchestrator | 2025-06-19 10:35:12 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:35:15.309089 | orchestrator | 2025-06-19 10:35:15 | INFO  | Task 2a4dbdac-2535-4c11-9cf7-f0ad4b37d9e3 is in state STARTED
2025-06-19 10:35:15.311509 | orchestrator | 2025-06-19 10:35:15 | INFO  | Task 0b95557d-06ff-4b5f-bfb8-a28961fe0728 is in state STARTED
2025-06-19 10:35:15.311808 | orchestrator | 2025-06-19 10:35:15 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:35:18.360187 | orchestrator | 2025-06-19 10:35:18 | INFO  | Task 7d06717f-1d43-4a7d-bee5-54b2c742051a is in state STARTED
2025-06-19 10:35:18.361437 | orchestrator | 2025-06-19 10:35:18 | INFO  | Task 795e2426-e2a6-4c14-9c8b-39f11ead4041 is in state STARTED
2025-06-19 10:35:18.364789 | orchestrator | 2025-06-19 10:35:18 | INFO  | Task 2a4dbdac-2535-4c11-9cf7-f0ad4b37d9e3 is in state SUCCESS
2025-06-19 10:35:18.367085 | orchestrator |
2025-06-19 10:35:18.367122 | orchestrator |
2025-06-19 10:35:18.367135 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************
2025-06-19 10:35:18.367148 | orchestrator |
2025-06-19 10:35:18.367189 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-06-19 10:35:18.367201 | orchestrator | Thursday 19 June 2025 10:32:05 +0000 (0:00:00.103) 0:00:00.103 *********
2025-06-19 10:35:18.367213 | orchestrator | ok: [localhost] => {
2025-06-19 10:35:18.367225 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine."
2025-06-19 10:35:18.367237 | orchestrator | }
2025-06-19 10:35:18.367248 | orchestrator |
2025-06-19 10:35:18.367259 | orchestrator | TASK [Check MariaDB service] ***************************************************
2025-06-19 10:35:18.367292 | orchestrator | Thursday 19 June 2025 10:32:05 +0000 (0:00:00.062) 0:00:00.165 *********
2025-06-19 10:35:18.367304 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"}
2025-06-19 10:35:18.367317 | orchestrator | ...ignoring
2025-06-19 10:35:18.367327 | orchestrator |
2025-06-19 10:35:18.367338 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ********
2025-06-19 10:35:18.367349 | orchestrator | Thursday 19 June 2025 10:32:08 +0000 (0:00:02.831) 0:00:02.997 *********
2025-06-19 10:35:18.367360 | orchestrator | skipping: [localhost]
2025-06-19 10:35:18.367370 | orchestrator |
2025-06-19 10:35:18.367382 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ******************************
2025-06-19 10:35:18.367392 | orchestrator | Thursday 19 June 2025 10:32:08 +0000 (0:00:00.051) 0:00:03.048 *********
2025-06-19 10:35:18.367403 | orchestrator | ok: [localhost]
2025-06-19 10:35:18.367414 | orchestrator |
2025-06-19 10:35:18.367425 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-19 10:35:18.367459 | orchestrator |
2025-06-19 10:35:18.367470 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-19 10:35:18.367481 | orchestrator | Thursday 19 June 2025 10:32:08 +0000 (0:00:00.169) 0:00:03.218 *********
2025-06-19 10:35:18.367492 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:35:18.367503 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:35:18.367514 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:35:18.367524 | orchestrator |
2025-06-19 10:35:18.367535 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-19 10:35:18.367546 | orchestrator | Thursday 19 June 2025 10:32:08 +0000 (0:00:00.332) 0:00:03.550 *********
2025-06-19 10:35:18.367557 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2025-06-19 10:35:18.367582 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2025-06-19 10:35:18.367593 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2025-06-19 10:35:18.367604 | orchestrator |
2025-06-19 10:35:18.367614 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2025-06-19 10:35:18.367625 | orchestrator |
2025-06-19 10:35:18.367636 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2025-06-19 10:35:18.367647 | orchestrator | Thursday 19 June 2025 10:32:09 +0000 (0:00:00.572) 0:00:04.123 *********
2025-06-19 10:35:18.367657 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-19 10:35:18.367668 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-06-19 10:35:18.367681 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-06-19 10:35:18.367693 | orchestrator |
2025-06-19 10:35:18.367705 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-06-19 10:35:18.367718 | orchestrator | Thursday 19 June 2025 10:32:10 +0000 (0:00:00.528) 0:00:04.651 *********
2025-06-19 10:35:18.367730 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-19 10:35:18.367743 | orchestrator |
2025-06-19 10:35:18.367754 | orchestrator | TASK [mariadb : Ensuring config directories exist] *****************************
2025-06-19 10:35:18.367766 | orchestrator | Thursday 19 June 2025 10:32:10 +0000 (0:00:00.607) 0:00:05.259 *********
2025-06-19 10:35:18.367801 | orchestrator
| changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-19 10:35:18.367834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-19 10:35:18.367850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-19 10:35:18.367870 | orchestrator | 2025-06-19 10:35:18.367888 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-06-19 10:35:18.367901 | orchestrator | Thursday 19 June 2025 10:32:13 +0000 (0:00:03.027) 0:00:08.287 ********* 2025-06-19 10:35:18.367914 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:35:18.367927 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:35:18.367939 | 
orchestrator | skipping: [testbed-node-2] 2025-06-19 10:35:18.367951 | orchestrator | 2025-06-19 10:35:18.367963 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-06-19 10:35:18.367975 | orchestrator | Thursday 19 June 2025 10:32:14 +0000 (0:00:00.704) 0:00:08.992 ********* 2025-06-19 10:35:18.367987 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:35:18.368000 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:35:18.368012 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:35:18.368024 | orchestrator | 2025-06-19 10:35:18.368037 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-06-19 10:35:18.368049 | orchestrator | Thursday 19 June 2025 10:32:15 +0000 (0:00:01.462) 0:00:10.454 ********* 2025-06-19 10:35:18.368067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-19 10:35:18.368086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-19 10:35:18.368114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout 
client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-19 10:35:18.368126 | orchestrator | 2025-06-19 10:35:18.368137 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-06-19 10:35:18.368148 | orchestrator | Thursday 19 June 2025 10:32:20 +0000 (0:00:04.404) 0:00:14.859 ********* 2025-06-19 10:35:18.368190 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:35:18.368201 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:35:18.368212 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:35:18.368222 | orchestrator | 2025-06-19 10:35:18.368233 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-06-19 10:35:18.368244 | orchestrator | Thursday 19 June 2025 10:32:21 +0000 (0:00:01.178) 0:00:16.038 ********* 2025-06-19 10:35:18.368254 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:35:18.368272 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:35:18.368283 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:35:18.368293 | orchestrator | 2025-06-19 10:35:18.368304 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-19 10:35:18.368314 | orchestrator | Thursday 19 June 2025 10:32:25 +0000 (0:00:04.481) 0:00:20.519 ********* 2025-06-19 10:35:18.368325 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:35:18.368336 | orchestrator | 2025-06-19 10:35:18.368346 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-06-19 
10:35:18.368357 | orchestrator | Thursday 19 June 2025 10:32:26 +0000 (0:00:00.626) 0:00:21.146 ********* 2025-06-19 10:35:18.368378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-19 10:35:18.368391 | orchestrator | 
skipping: [testbed-node-0] 2025-06-19 10:35:18.368408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-19 10:35:18.368426 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:35:18.368445 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-19 10:35:18.368457 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:35:18.368468 | orchestrator | 2025-06-19 10:35:18.368478 | orchestrator | TASK [service-cert-copy : mariadb 
| Copying over backend internal TLS certificate] *** 2025-06-19 10:35:18.368489 | orchestrator | Thursday 19 June 2025 10:32:30 +0000 (0:00:03.858) 0:00:25.004 ********* 2025-06-19 10:35:18.368505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 
5 backup', '']}}}})  2025-06-19 10:35:18.368523 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:35:18.368540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-19 10:35:18.368552 | orchestrator | skipping: 
[testbed-node-2] 2025-06-19 10:35:18.368568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-19 10:35:18.368593 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:35:18.368604 | orchestrator | 2025-06-19 
10:35:18.368615 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-06-19 10:35:18.368625 | orchestrator | Thursday 19 June 2025 10:32:33 +0000 (0:00:03.457) 0:00:28.462 ********* 2025-06-19 10:35:18.368637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-19 10:35:18.368648 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:35:18.368668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', '']}}}})  2025-06-19 10:35:18.368692 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:35:18.368703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-19 10:35:18.368715 | orchestrator | skipping: 
[testbed-node-2] 2025-06-19 10:35:18.368725 | orchestrator | 2025-06-19 10:35:18.368736 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-06-19 10:35:18.368747 | orchestrator | Thursday 19 June 2025 10:32:36 +0000 (0:00:03.073) 0:00:31.535 ********* 2025-06-19 10:35:18.368766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-19 10:35:18.368790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}}}}) 2025-06-19 10:35:18.368811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-19 10:35:18.368824 | orchestrator | 2025-06-19 10:35:18.368835 | orchestrator | TASK [mariadb : Create MariaDB volume] 
***************************************** 2025-06-19 10:35:18.368846 | orchestrator | Thursday 19 June 2025 10:32:40 +0000 (0:00:03.868) 0:00:35.404 ********* 2025-06-19 10:35:18.368862 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:35:18.368873 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:35:18.368884 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:35:18.368894 | orchestrator | 2025-06-19 10:35:18.368905 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-06-19 10:35:18.368916 | orchestrator | Thursday 19 June 2025 10:32:42 +0000 (0:00:01.160) 0:00:36.564 ********* 2025-06-19 10:35:18.368926 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:35:18.368937 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:35:18.368948 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:35:18.368958 | orchestrator | 2025-06-19 10:35:18.368969 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-06-19 10:35:18.368980 | orchestrator | Thursday 19 June 2025 10:32:42 +0000 (0:00:00.305) 0:00:36.870 ********* 2025-06-19 10:35:18.368995 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:35:18.369006 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:35:18.369016 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:35:18.369027 | orchestrator | 2025-06-19 10:35:18.369038 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-06-19 10:35:18.369049 | orchestrator | Thursday 19 June 2025 10:32:42 +0000 (0:00:00.316) 0:00:37.186 ********* 2025-06-19 10:35:18.369060 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-06-19 10:35:18.369071 | orchestrator | ...ignoring 2025-06-19 10:35:18.369082 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-06-19 10:35:18.369093 | orchestrator | ...ignoring 2025-06-19 10:35:18.369104 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-06-19 10:35:18.369114 | orchestrator | ...ignoring 2025-06-19 10:35:18.369125 | orchestrator | 2025-06-19 10:35:18.369135 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-06-19 10:35:18.369146 | orchestrator | Thursday 19 June 2025 10:32:53 +0000 (0:00:10.966) 0:00:48.152 ********* 2025-06-19 10:35:18.369172 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:35:18.369183 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:35:18.369194 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:35:18.369204 | orchestrator | 2025-06-19 10:35:18.369215 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-06-19 10:35:18.369226 | orchestrator | Thursday 19 June 2025 10:32:53 +0000 (0:00:00.400) 0:00:48.553 ********* 2025-06-19 10:35:18.369236 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:35:18.369247 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:35:18.369257 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:35:18.369268 | orchestrator | 2025-06-19 10:35:18.369279 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-06-19 10:35:18.369289 | orchestrator | Thursday 19 June 2025 10:32:54 +0000 (0:00:00.719) 0:00:49.272 ********* 2025-06-19 10:35:18.369300 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:35:18.369311 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:35:18.369321 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:35:18.369332 | orchestrator | 2025-06-19 10:35:18.369342 | orchestrator | TASK 
[mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-06-19 10:35:18.369353 | orchestrator | Thursday 19 June 2025 10:32:55 +0000 (0:00:00.426) 0:00:49.699 ********* 2025-06-19 10:35:18.369363 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:35:18.369374 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:35:18.369384 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:35:18.369395 | orchestrator | 2025-06-19 10:35:18.369406 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-06-19 10:35:18.369416 | orchestrator | Thursday 19 June 2025 10:32:55 +0000 (0:00:00.403) 0:00:50.103 ********* 2025-06-19 10:35:18.369427 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:35:18.369456 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:35:18.369467 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:35:18.369478 | orchestrator | 2025-06-19 10:35:18.369488 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-06-19 10:35:18.369499 | orchestrator | Thursday 19 June 2025 10:32:55 +0000 (0:00:00.403) 0:00:50.507 ********* 2025-06-19 10:35:18.369515 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:35:18.369526 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:35:18.369537 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:35:18.369547 | orchestrator | 2025-06-19 10:35:18.369558 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-19 10:35:18.369568 | orchestrator | Thursday 19 June 2025 10:32:56 +0000 (0:00:00.663) 0:00:51.170 ********* 2025-06-19 10:35:18.369579 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:35:18.369590 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:35:18.369601 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-06-19 10:35:18.369611 | orchestrator | 2025-06-19 
10:35:18.369622 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-06-19 10:35:18.369633 | orchestrator | Thursday 19 June 2025 10:32:56 +0000 (0:00:00.362) 0:00:51.533 ********* 2025-06-19 10:35:18.369643 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:35:18.369654 | orchestrator | 2025-06-19 10:35:18.369664 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-06-19 10:35:18.369675 | orchestrator | Thursday 19 June 2025 10:33:07 +0000 (0:00:10.358) 0:01:01.892 ********* 2025-06-19 10:35:18.369686 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:35:18.369696 | orchestrator | 2025-06-19 10:35:18.369707 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-19 10:35:18.369718 | orchestrator | Thursday 19 June 2025 10:33:07 +0000 (0:00:00.137) 0:01:02.029 ********* 2025-06-19 10:35:18.369728 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:35:18.369739 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:35:18.369750 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:35:18.369760 | orchestrator | 2025-06-19 10:35:18.369771 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-06-19 10:35:18.369781 | orchestrator | Thursday 19 June 2025 10:33:08 +0000 (0:00:00.962) 0:01:02.991 ********* 2025-06-19 10:35:18.369792 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:35:18.369803 | orchestrator | 2025-06-19 10:35:18.369813 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-06-19 10:35:18.369824 | orchestrator | Thursday 19 June 2025 10:33:16 +0000 (0:00:07.647) 0:01:10.639 ********* 2025-06-19 10:35:18.369835 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:35:18.369845 | orchestrator | 2025-06-19 10:35:18.369856 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB 
service to sync WSREP] ******* 2025-06-19 10:35:18.369867 | orchestrator | Thursday 19 June 2025 10:33:17 +0000 (0:00:01.573) 0:01:12.212 ********* 2025-06-19 10:35:18.369877 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:35:18.369888 | orchestrator | 2025-06-19 10:35:18.369903 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-06-19 10:35:18.369914 | orchestrator | Thursday 19 June 2025 10:33:20 +0000 (0:00:02.506) 0:01:14.718 ********* 2025-06-19 10:35:18.369925 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:35:18.369936 | orchestrator | 2025-06-19 10:35:18.369946 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-06-19 10:35:18.369957 | orchestrator | Thursday 19 June 2025 10:33:20 +0000 (0:00:00.126) 0:01:14.845 ********* 2025-06-19 10:35:18.369968 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:35:18.369978 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:35:18.369989 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:35:18.370000 | orchestrator | 2025-06-19 10:35:18.370010 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-06-19 10:35:18.370074 | orchestrator | Thursday 19 June 2025 10:33:20 +0000 (0:00:00.487) 0:01:15.333 ********* 2025-06-19 10:35:18.370092 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:35:18.370103 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-06-19 10:35:18.370114 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:35:18.370124 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:35:18.370135 | orchestrator | 2025-06-19 10:35:18.370146 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-06-19 10:35:18.370207 | orchestrator | skipping: no hosts matched 2025-06-19 10:35:18.370220 | orchestrator | 2025-06-19 10:35:18.370231 
| orchestrator | PLAY [Start mariadb services] ************************************************** 2025-06-19 10:35:18.370242 | orchestrator | 2025-06-19 10:35:18.370253 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-06-19 10:35:18.370263 | orchestrator | Thursday 19 June 2025 10:33:21 +0000 (0:00:00.334) 0:01:15.668 ********* 2025-06-19 10:35:18.370274 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:35:18.370285 | orchestrator | 2025-06-19 10:35:18.370296 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-06-19 10:35:18.370307 | orchestrator | Thursday 19 June 2025 10:33:43 +0000 (0:00:22.122) 0:01:37.790 ********* 2025-06-19 10:35:18.370317 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:35:18.370328 | orchestrator | 2025-06-19 10:35:18.370339 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-06-19 10:35:18.370350 | orchestrator | Thursday 19 June 2025 10:33:58 +0000 (0:00:15.625) 0:01:53.416 ********* 2025-06-19 10:35:18.370360 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:35:18.370371 | orchestrator | 2025-06-19 10:35:18.370382 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-06-19 10:35:18.370393 | orchestrator | 2025-06-19 10:35:18.370404 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-06-19 10:35:18.370414 | orchestrator | Thursday 19 June 2025 10:34:01 +0000 (0:00:02.409) 0:01:55.826 ********* 2025-06-19 10:35:18.370425 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:35:18.370436 | orchestrator | 2025-06-19 10:35:18.370446 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-06-19 10:35:18.370457 | orchestrator | Thursday 19 June 2025 10:34:21 +0000 (0:00:19.802) 0:02:15.629 ********* 2025-06-19 10:35:18.370468 | 
orchestrator | ok: [testbed-node-2] 2025-06-19 10:35:18.370479 | orchestrator | 2025-06-19 10:35:18.370490 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-06-19 10:35:18.370500 | orchestrator | Thursday 19 June 2025 10:34:41 +0000 (0:00:20.584) 0:02:36.213 ********* 2025-06-19 10:35:18.370511 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:35:18.370522 | orchestrator | 2025-06-19 10:35:18.370533 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-06-19 10:35:18.370543 | orchestrator | 2025-06-19 10:35:18.370561 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-06-19 10:35:18.370573 | orchestrator | Thursday 19 June 2025 10:34:44 +0000 (0:00:02.511) 0:02:38.725 ********* 2025-06-19 10:35:18.370583 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:35:18.370594 | orchestrator | 2025-06-19 10:35:18.370604 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-06-19 10:35:18.370615 | orchestrator | Thursday 19 June 2025 10:35:00 +0000 (0:00:16.273) 0:02:54.998 ********* 2025-06-19 10:35:18.370626 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:35:18.370636 | orchestrator | 2025-06-19 10:35:18.370647 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-06-19 10:35:18.370657 | orchestrator | Thursday 19 June 2025 10:35:01 +0000 (0:00:00.896) 0:02:55.895 ********* 2025-06-19 10:35:18.370668 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:35:18.370679 | orchestrator | 2025-06-19 10:35:18.370689 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-06-19 10:35:18.370699 | orchestrator | 2025-06-19 10:35:18.370708 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-06-19 10:35:18.370724 | orchestrator | 
Thursday 19 June 2025 10:35:03 +0000 (0:00:02.394) 0:02:58.290 ********* 2025-06-19 10:35:18.370733 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:35:18.370743 | orchestrator | 2025-06-19 10:35:18.370752 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-06-19 10:35:18.370761 | orchestrator | Thursday 19 June 2025 10:35:04 +0000 (0:00:00.523) 0:02:58.813 ********* 2025-06-19 10:35:18.370770 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:35:18.370780 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:35:18.370789 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:35:18.370799 | orchestrator | 2025-06-19 10:35:18.370808 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-06-19 10:35:18.370818 | orchestrator | Thursday 19 June 2025 10:35:06 +0000 (0:00:02.375) 0:03:01.189 ********* 2025-06-19 10:35:18.370827 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:35:18.370837 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:35:18.370846 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:35:18.370855 | orchestrator | 2025-06-19 10:35:18.370865 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-06-19 10:35:18.370874 | orchestrator | Thursday 19 June 2025 10:35:08 +0000 (0:00:02.239) 0:03:03.429 ********* 2025-06-19 10:35:18.370883 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:35:18.370893 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:35:18.370902 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:35:18.370912 | orchestrator | 2025-06-19 10:35:18.370926 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-06-19 10:35:18.370936 | orchestrator | Thursday 19 June 2025 10:35:10 +0000 (0:00:02.054) 0:03:05.483 ********* 2025-06-19 10:35:18.370945 | 
orchestrator | skipping: [testbed-node-1] 2025-06-19 10:35:18.370954 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:35:18.370964 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:35:18.370973 | orchestrator | 2025-06-19 10:35:18.370983 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-06-19 10:35:18.370992 | orchestrator | Thursday 19 June 2025 10:35:13 +0000 (0:00:02.152) 0:03:07.635 ********* 2025-06-19 10:35:18.371001 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:35:18.371010 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:35:18.371020 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:35:18.371029 | orchestrator | 2025-06-19 10:35:18.371039 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-06-19 10:35:18.371048 | orchestrator | Thursday 19 June 2025 10:35:16 +0000 (0:00:03.016) 0:03:10.652 ********* 2025-06-19 10:35:18.371057 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:35:18.371067 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:35:18.371076 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:35:18.371085 | orchestrator | 2025-06-19 10:35:18.371095 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-19 10:35:18.371104 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-06-19 10:35:18.371114 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-06-19 10:35:18.371126 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-06-19 10:35:18.371135 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-06-19 10:35:18.371145 | orchestrator | 2025-06-19 10:35:18.371168 | orchestrator | 2025-06-19 10:35:18.371178 | orchestrator | 
TASKS RECAP ******************************************************************** 2025-06-19 10:35:18.371187 | orchestrator | Thursday 19 June 2025 10:35:16 +0000 (0:00:00.245) 0:03:10.898 ********* 2025-06-19 10:35:18.371202 | orchestrator | =============================================================================== 2025-06-19 10:35:18.371212 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 41.93s 2025-06-19 10:35:18.371221 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 36.21s 2025-06-19 10:35:18.371230 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 16.27s 2025-06-19 10:35:18.371240 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.97s 2025-06-19 10:35:18.371249 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.36s 2025-06-19 10:35:18.371259 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.65s 2025-06-19 10:35:18.371273 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.92s 2025-06-19 10:35:18.371283 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.48s 2025-06-19 10:35:18.371292 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.40s 2025-06-19 10:35:18.371301 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.87s 2025-06-19 10:35:18.371311 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.86s 2025-06-19 10:35:18.371320 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.46s 2025-06-19 10:35:18.371330 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.07s 2025-06-19 10:35:18.371339 | orchestrator | mariadb : 
Ensuring config directories exist ----------------------------- 3.03s 2025-06-19 10:35:18.371349 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.02s 2025-06-19 10:35:18.371358 | orchestrator | Check MariaDB service --------------------------------------------------- 2.83s 2025-06-19 10:35:18.371367 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.51s 2025-06-19 10:35:18.371376 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.39s 2025-06-19 10:35:18.371386 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.38s 2025-06-19 10:35:18.371395 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.24s 2025-06-19 10:35:18.371404 | orchestrator | 2025-06-19 10:35:18 | INFO  | Task 0b95557d-06ff-4b5f-bfb8-a28961fe0728 is in state STARTED 2025-06-19 10:35:18.371414 | orchestrator | 2025-06-19 10:35:18 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:35:21.425035 | orchestrator | 2025-06-19 10:35:21 | INFO  | Task 7d06717f-1d43-4a7d-bee5-54b2c742051a is in state STARTED 2025-06-19 10:35:21.426434 | orchestrator | 2025-06-19 10:35:21 | INFO  | Task 795e2426-e2a6-4c14-9c8b-39f11ead4041 is in state STARTED 2025-06-19 10:35:21.426468 | orchestrator | 2025-06-19 10:35:21 | INFO  | Task 0b95557d-06ff-4b5f-bfb8-a28961fe0728 is in state STARTED 2025-06-19 10:35:21.426480 | orchestrator | 2025-06-19 10:35:21 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:35:24.476897 | orchestrator | 2025-06-19 10:35:24 | INFO  | Task 7d06717f-1d43-4a7d-bee5-54b2c742051a is in state STARTED 2025-06-19 10:35:24.478448 | orchestrator | 2025-06-19 10:35:24 | INFO  | Task 795e2426-e2a6-4c14-9c8b-39f11ead4041 is in state STARTED 2025-06-19 10:35:24.479978 | orchestrator | 2025-06-19 10:35:24 | INFO  | Task 0b95557d-06ff-4b5f-bfb8-a28961fe0728 is in state STARTED 
2025-06-19 10:35:24.480398 | orchestrator | 2025-06-19 10:35:24 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:35:27.516115 | orchestrator | 2025-06-19 10:35:27 | INFO  | Task 7d06717f-1d43-4a7d-bee5-54b2c742051a is in state STARTED 2025-06-19 10:35:27.516663 | orchestrator | 2025-06-19 10:35:27 | INFO  | Task 795e2426-e2a6-4c14-9c8b-39f11ead4041 is in state STARTED 2025-06-19 10:35:27.518108 | orchestrator | 2025-06-19 10:35:27 | INFO  | Task 0b95557d-06ff-4b5f-bfb8-a28961fe0728 is in state STARTED 2025-06-19 10:35:27.518217 | orchestrator | 2025-06-19 10:35:27 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:35:30.567341 | orchestrator | 2025-06-19 10:35:30 | INFO  | Task 7d06717f-1d43-4a7d-bee5-54b2c742051a is in state STARTED 2025-06-19 10:35:30.568008 | orchestrator | 2025-06-19 10:35:30 | INFO  | Task 795e2426-e2a6-4c14-9c8b-39f11ead4041 is in state STARTED 2025-06-19 10:35:30.568799 | orchestrator | 2025-06-19 10:35:30 | INFO  | Task 0b95557d-06ff-4b5f-bfb8-a28961fe0728 is in state STARTED 2025-06-19 10:35:30.569097 | orchestrator | 2025-06-19 10:35:30 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:35:33.608710 | orchestrator | 2025-06-19 10:35:33 | INFO  | Task 7d06717f-1d43-4a7d-bee5-54b2c742051a is in state STARTED 2025-06-19 10:35:33.610295 | orchestrator | 2025-06-19 10:35:33 | INFO  | Task 795e2426-e2a6-4c14-9c8b-39f11ead4041 is in state STARTED 2025-06-19 10:35:33.612166 | orchestrator | 2025-06-19 10:35:33 | INFO  | Task 0b95557d-06ff-4b5f-bfb8-a28961fe0728 is in state STARTED 2025-06-19 10:35:33.612279 | orchestrator | 2025-06-19 10:35:33 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:35:36.647008 | orchestrator | 2025-06-19 10:35:36 | INFO  | Task 7d06717f-1d43-4a7d-bee5-54b2c742051a is in state STARTED 2025-06-19 10:35:36.647961 | orchestrator | 2025-06-19 10:35:36 | INFO  | Task 795e2426-e2a6-4c14-9c8b-39f11ead4041 is in state STARTED 2025-06-19 10:35:36.648062 | orchestrator | 2025-06-19 
10:35:36 | INFO  | Task 0b95557d-06ff-4b5f-bfb8-a28961fe0728 is in state STARTED 2025-06-19 10:35:36.648244 | orchestrator | 2025-06-19 10:35:36 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:35:39.699212 | orchestrator | 2025-06-19 10:35:39 | INFO  | Task 7d06717f-1d43-4a7d-bee5-54b2c742051a is in state STARTED 2025-06-19 10:35:39.700503 | orchestrator | 2025-06-19 10:35:39 | INFO  | Task 795e2426-e2a6-4c14-9c8b-39f11ead4041 is in state STARTED 2025-06-19 10:35:39.703169 | orchestrator | 2025-06-19 10:35:39 | INFO  | Task 0b95557d-06ff-4b5f-bfb8-a28961fe0728 is in state STARTED 2025-06-19 10:35:39.703558 | orchestrator | 2025-06-19 10:35:39 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:35:42.744528 | orchestrator | 2025-06-19 10:35:42 | INFO  | Task 7d06717f-1d43-4a7d-bee5-54b2c742051a is in state STARTED 2025-06-19 10:35:42.745009 | orchestrator | 2025-06-19 10:35:42 | INFO  | Task 795e2426-e2a6-4c14-9c8b-39f11ead4041 is in state STARTED 2025-06-19 10:35:42.746299 | orchestrator | 2025-06-19 10:35:42 | INFO  | Task 0b95557d-06ff-4b5f-bfb8-a28961fe0728 is in state STARTED 2025-06-19 10:35:42.746325 | orchestrator | 2025-06-19 10:35:42 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:35:45.782621 | orchestrator | 2025-06-19 10:35:45 | INFO  | Task fa1324ed-f000-41a3-bfae-80a02b92beff is in state STARTED 2025-06-19 10:35:45.782733 | orchestrator | 2025-06-19 10:35:45 | INFO  | Task 7d06717f-1d43-4a7d-bee5-54b2c742051a is in state STARTED 2025-06-19 10:35:45.782749 | orchestrator | 2025-06-19 10:35:45 | INFO  | Task 795e2426-e2a6-4c14-9c8b-39f11ead4041 is in state STARTED 2025-06-19 10:35:45.782761 | orchestrator | 2025-06-19 10:35:45 | INFO  | Task 0b95557d-06ff-4b5f-bfb8-a28961fe0728 is in state SUCCESS 2025-06-19 10:35:45.783798 | orchestrator | 2025-06-19 10:35:45.784495 | orchestrator | 2025-06-19 10:35:45.784517 | orchestrator | PLAY [Create ceph pools] ******************************************************* 
2025-06-19 10:35:45.784530 | orchestrator | 2025-06-19 10:35:45.784542 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-06-19 10:35:45.784554 | orchestrator | Thursday 19 June 2025 10:33:38 +0000 (0:00:00.567) 0:00:00.567 ********* 2025-06-19 10:35:45.784589 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-19 10:35:45.784601 | orchestrator | 2025-06-19 10:35:45.784612 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-06-19 10:35:45.784637 | orchestrator | Thursday 19 June 2025 10:33:38 +0000 (0:00:00.564) 0:00:01.132 ********* 2025-06-19 10:35:45.784649 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:35:45.784661 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:35:45.784671 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:35:45.784682 | orchestrator | 2025-06-19 10:35:45.784693 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-06-19 10:35:45.784703 | orchestrator | Thursday 19 June 2025 10:33:39 +0000 (0:00:00.602) 0:00:01.735 ********* 2025-06-19 10:35:45.784714 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:35:45.784725 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:35:45.784735 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:35:45.784746 | orchestrator | 2025-06-19 10:35:45.784756 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-06-19 10:35:45.784767 | orchestrator | Thursday 19 June 2025 10:33:39 +0000 (0:00:00.322) 0:00:02.057 ********* 2025-06-19 10:35:45.784777 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:35:45.784788 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:35:45.784798 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:35:45.784809 | orchestrator | 2025-06-19 10:35:45.784819 | orchestrator | TASK [ceph-facts : Set_fact 
container_binary] ********************************** 2025-06-19 10:35:45.784829 | orchestrator | Thursday 19 June 2025 10:33:40 +0000 (0:00:00.799) 0:00:02.857 ********* 2025-06-19 10:35:45.784840 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:35:45.784851 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:35:45.784861 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:35:45.784871 | orchestrator | 2025-06-19 10:35:45.784882 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-06-19 10:35:45.784892 | orchestrator | Thursday 19 June 2025 10:33:40 +0000 (0:00:00.265) 0:00:03.122 ********* 2025-06-19 10:35:45.784903 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:35:45.784913 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:35:45.784923 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:35:45.784934 | orchestrator | 2025-06-19 10:35:45.784944 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-06-19 10:35:45.784955 | orchestrator | Thursday 19 June 2025 10:33:41 +0000 (0:00:00.254) 0:00:03.377 ********* 2025-06-19 10:35:45.784965 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:35:45.784976 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:35:45.784986 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:35:45.784997 | orchestrator | 2025-06-19 10:35:45.785007 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-06-19 10:35:45.785018 | orchestrator | Thursday 19 June 2025 10:33:41 +0000 (0:00:00.296) 0:00:03.673 ********* 2025-06-19 10:35:45.785028 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:35:45.785039 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:35:45.785050 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:35:45.785060 | orchestrator | 2025-06-19 10:35:45.785071 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 
2025-06-19 10:35:45.785082 | orchestrator | Thursday 19 June 2025 10:33:41 +0000 (0:00:00.404) 0:00:04.078 ********* 2025-06-19 10:35:45.785133 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:35:45.785147 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:35:45.785158 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:35:45.785170 | orchestrator | 2025-06-19 10:35:45.785183 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-06-19 10:35:45.785195 | orchestrator | Thursday 19 June 2025 10:33:42 +0000 (0:00:00.297) 0:00:04.375 ********* 2025-06-19 10:35:45.785207 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-19 10:35:45.785219 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-19 10:35:45.785239 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-19 10:35:45.785251 | orchestrator | 2025-06-19 10:35:45.785263 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-06-19 10:35:45.785275 | orchestrator | Thursday 19 June 2025 10:33:42 +0000 (0:00:00.597) 0:00:04.972 ********* 2025-06-19 10:35:45.785287 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:35:45.785299 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:35:45.785312 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:35:45.785323 | orchestrator | 2025-06-19 10:35:45.785336 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-06-19 10:35:45.785348 | orchestrator | Thursday 19 June 2025 10:33:43 +0000 (0:00:00.408) 0:00:05.381 ********* 2025-06-19 10:35:45.785360 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-19 10:35:45.785371 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-19 
10:35:45.785382 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-19 10:35:45.785393 | orchestrator | 2025-06-19 10:35:45.785403 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-06-19 10:35:45.785414 | orchestrator | Thursday 19 June 2025 10:33:45 +0000 (0:00:02.108) 0:00:07.489 ********* 2025-06-19 10:35:45.785424 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-06-19 10:35:45.785435 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-06-19 10:35:45.785446 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-06-19 10:35:45.785456 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:35:45.785467 | orchestrator | 2025-06-19 10:35:45.785478 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-06-19 10:35:45.785535 | orchestrator | Thursday 19 June 2025 10:33:45 +0000 (0:00:00.411) 0:00:07.901 ********* 2025-06-19 10:35:45.785550 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-06-19 10:35:45.785571 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-06-19 10:35:45.785582 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-06-19 10:35:45.785593 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:35:45.785604 | orchestrator | 
2025-06-19 10:35:45.785615 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-06-19 10:35:45.785626 | orchestrator | Thursday 19 June 2025 10:33:46 +0000 (0:00:00.766) 0:00:08.667 ********* 2025-06-19 10:35:45.785639 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-19 10:35:45.785653 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-19 10:35:45.785664 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-19 10:35:45.785682 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:35:45.785693 | orchestrator | 2025-06-19 10:35:45.785704 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-06-19 10:35:45.785715 | orchestrator | Thursday 19 June 2025 10:33:46 +0000 (0:00:00.160) 0:00:08.827 ********* 2025-06-19 10:35:45.785728 | 
orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'c7181e2bbd59', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-06-19 10:33:43.700534', 'end': '2025-06-19 10:33:43.742065', 'delta': '0:00:00.041531', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c7181e2bbd59'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-06-19 10:35:45.785743 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '9033b53796cd', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-06-19 10:33:44.405539', 'end': '2025-06-19 10:33:44.448203', 'delta': '0:00:00.042664', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9033b53796cd'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-06-19 10:35:45.785787 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'e0631e3e8f22', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-06-19 10:33:44.962799', 'end': '2025-06-19 10:33:45.006532', 'delta': '0:00:00.043733', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': 
False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e0631e3e8f22'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-06-19 10:35:45.785801 | orchestrator | 2025-06-19 10:35:45.785812 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-06-19 10:35:45.785823 | orchestrator | Thursday 19 June 2025 10:33:46 +0000 (0:00:00.366) 0:00:09.194 ********* 2025-06-19 10:35:45.785834 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:35:45.785845 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:35:45.785855 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:35:45.785866 | orchestrator | 2025-06-19 10:35:45.785877 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-06-19 10:35:45.785887 | orchestrator | Thursday 19 June 2025 10:33:47 +0000 (0:00:00.446) 0:00:09.641 ********* 2025-06-19 10:35:45.785898 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-06-19 10:35:45.785909 | orchestrator | 2025-06-19 10:35:45.785920 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-06-19 10:35:45.785931 | orchestrator | Thursday 19 June 2025 10:33:50 +0000 (0:00:02.746) 0:00:12.388 ********* 2025-06-19 10:35:45.785948 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:35:45.785959 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:35:45.785970 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:35:45.785980 | orchestrator | 2025-06-19 10:35:45.785991 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-06-19 10:35:45.786002 | orchestrator | Thursday 19 June 2025 10:33:50 +0000 (0:00:00.320) 0:00:12.709 ********* 
2025-06-19 10:35:45.786012 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:35:45.786083 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:35:45.786117 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:35:45.786128 | orchestrator | 2025-06-19 10:35:45.786225 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-06-19 10:35:45.786249 | orchestrator | Thursday 19 June 2025 10:33:50 +0000 (0:00:00.405) 0:00:13.114 ********* 2025-06-19 10:35:45.786260 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:35:45.786270 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:35:45.786281 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:35:45.786291 | orchestrator | 2025-06-19 10:35:45.786302 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-06-19 10:35:45.786313 | orchestrator | Thursday 19 June 2025 10:33:51 +0000 (0:00:00.479) 0:00:13.594 ********* 2025-06-19 10:35:45.786324 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:35:45.786334 | orchestrator | 2025-06-19 10:35:45.786345 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-06-19 10:35:45.786356 | orchestrator | Thursday 19 June 2025 10:33:51 +0000 (0:00:00.131) 0:00:13.725 ********* 2025-06-19 10:35:45.786366 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:35:45.786377 | orchestrator | 2025-06-19 10:35:45.786387 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-06-19 10:35:45.786398 | orchestrator | Thursday 19 June 2025 10:33:51 +0000 (0:00:00.222) 0:00:13.948 ********* 2025-06-19 10:35:45.786408 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:35:45.786419 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:35:45.786429 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:35:45.786440 | orchestrator | 2025-06-19 10:35:45.786450 | 
orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-06-19 10:35:45.786461 | orchestrator | Thursday 19 June 2025 10:33:51 +0000 (0:00:00.281) 0:00:14.229 ********* 2025-06-19 10:35:45.786471 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:35:45.786482 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:35:45.786493 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:35:45.786503 | orchestrator | 2025-06-19 10:35:45.786513 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-06-19 10:35:45.786524 | orchestrator | Thursday 19 June 2025 10:33:52 +0000 (0:00:00.312) 0:00:14.542 ********* 2025-06-19 10:35:45.786535 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:35:45.786545 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:35:45.786556 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:35:45.786566 | orchestrator | 2025-06-19 10:35:45.786577 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-06-19 10:35:45.786587 | orchestrator | Thursday 19 June 2025 10:33:52 +0000 (0:00:00.454) 0:00:14.997 ********* 2025-06-19 10:35:45.786598 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:35:45.786608 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:35:45.786619 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:35:45.786629 | orchestrator | 2025-06-19 10:35:45.786640 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-06-19 10:35:45.786650 | orchestrator | Thursday 19 June 2025 10:33:52 +0000 (0:00:00.317) 0:00:15.315 ********* 2025-06-19 10:35:45.786661 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:35:45.786671 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:35:45.786682 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:35:45.786692 | orchestrator | 2025-06-19 10:35:45.786703 | 
orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-06-19 10:35:45.786722 | orchestrator | Thursday 19 June 2025 10:33:53 +0000 (0:00:00.313) 0:00:15.628 ********* 2025-06-19 10:35:45.786733 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:35:45.786743 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:35:45.786754 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:35:45.786765 | orchestrator | 2025-06-19 10:35:45.786776 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-06-19 10:35:45.786830 | orchestrator | Thursday 19 June 2025 10:33:53 +0000 (0:00:00.317) 0:00:15.946 ********* 2025-06-19 10:35:45.786843 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:35:45.786854 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:35:45.786864 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:35:45.786875 | orchestrator | 2025-06-19 10:35:45.786885 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-06-19 10:35:45.786896 | orchestrator | Thursday 19 June 2025 10:33:54 +0000 (0:00:00.472) 0:00:16.418 ********* 2025-06-19 10:35:45.786913 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3f69fe47--683a--554f--92f7--031e2a26df27-osd--block--3f69fe47--683a--554f--92f7--031e2a26df27', 'dm-uuid-LVM-3FbwjtgmDfoYI2HFMVZ7etdFItcyZ0uA120tcIDhw0ksPX9thSpC6lMqPATNVSsQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-19 10:35:45.786927 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 
'links': {'ids': ['dm-name-ceph--04cfa187--5820--5d05--93de--747bac6f19c1-osd--block--04cfa187--5820--5d05--93de--747bac6f19c1', 'dm-uuid-LVM-MJpIKKReme2cd0ENNcgCir2ui8Foeckc0XwTlPW1Xwlo0ltlGpmVFKsVBWqITycn'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-19 10:35:45.786939 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:35:45.786951 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:35:45.786962 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:35:45.786974 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:35:45.786991 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:35:45.787035 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:35:45.787048 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:35:45.787064 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:35:45.787080 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_236643a8-3fbf-4a38-ac5c-7d15a0179c3a', 'scsi-SQEMU_QEMU_HARDDISK_236643a8-3fbf-4a38-ac5c-7d15a0179c3a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_236643a8-3fbf-4a38-ac5c-7d15a0179c3a-part1', 'scsi-SQEMU_QEMU_HARDDISK_236643a8-3fbf-4a38-ac5c-7d15a0179c3a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_236643a8-3fbf-4a38-ac5c-7d15a0179c3a-part14', 'scsi-SQEMU_QEMU_HARDDISK_236643a8-3fbf-4a38-ac5c-7d15a0179c3a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_236643a8-3fbf-4a38-ac5c-7d15a0179c3a-part15', 'scsi-SQEMU_QEMU_HARDDISK_236643a8-3fbf-4a38-ac5c-7d15a0179c3a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_236643a8-3fbf-4a38-ac5c-7d15a0179c3a-part16', 'scsi-SQEMU_QEMU_HARDDISK_236643a8-3fbf-4a38-ac5c-7d15a0179c3a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 
'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-19 10:35:45.787147 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--3f69fe47--683a--554f--92f7--031e2a26df27-osd--block--3f69fe47--683a--554f--92f7--031e2a26df27'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-97R3N7-2A34-s4Zc-sU9t-FfDM-jVwa-FScsR4', 'scsi-0QEMU_QEMU_HARDDISK_5fba7027-7a45-483b-8644-e0c0ef304581', 'scsi-SQEMU_QEMU_HARDDISK_5fba7027-7a45-483b-8644-e0c0ef304581'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-19 10:35:45.787206 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6ed986be--d550--5e98--86ee--1d899c3b1ca9-osd--block--6ed986be--d550--5e98--86ee--1d899c3b1ca9', 'dm-uuid-LVM-6UuOoFyYjJqfS0KyCKdOA2ZjxgfYaumB0JzZSfzm5xIlni9r9FK3ddaezn5i3pKJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-19 10:35:45.787225 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': 
['ceph--04cfa187--5820--5d05--93de--747bac6f19c1-osd--block--04cfa187--5820--5d05--93de--747bac6f19c1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-03tnYq-6ggS-I1wM-HNLR-s7cp-1W3b-GMFoGB', 'scsi-0QEMU_QEMU_HARDDISK_5cdb3fff-d4f1-405f-abd7-b446ee32738c', 'scsi-SQEMU_QEMU_HARDDISK_5cdb3fff-d4f1-405f-abd7-b446ee32738c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-19 10:35:45.787237 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--79abc216--b4ba--5883--a19f--da26bd64d731-osd--block--79abc216--b4ba--5883--a19f--da26bd64d731', 'dm-uuid-LVM-5u9SY1ubYuxxI9hO0nIhcA10D1Nk4CPofYdDZ1RLD6MDXShcpLtHCCGGR0BEV1H9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-19 10:35:45.787248 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6c4f0114-96df-472d-8cd2-75acad9ce658', 'scsi-SQEMU_QEMU_HARDDISK_6c4f0114-96df-472d-8cd2-75acad9ce658'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-19 10:35:45.787261 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:35:45.787273 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-19-09-43-36-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-19 10:35:45.787290 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2025-06-19 10:35:45.787326 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:35:45.787338 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:35:45.787353 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:35:45.787363 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:35:45.787372 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:35:45.787382 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:35:45.787418 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32c85e8d-b71e-43db-9ec2-d353b455abf6', 'scsi-SQEMU_QEMU_HARDDISK_32c85e8d-b71e-43db-9ec2-d353b455abf6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32c85e8d-b71e-43db-9ec2-d353b455abf6-part1', 'scsi-SQEMU_QEMU_HARDDISK_32c85e8d-b71e-43db-9ec2-d353b455abf6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32c85e8d-b71e-43db-9ec2-d353b455abf6-part14', 'scsi-SQEMU_QEMU_HARDDISK_32c85e8d-b71e-43db-9ec2-d353b455abf6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_32c85e8d-b71e-43db-9ec2-d353b455abf6-part15', 'scsi-SQEMU_QEMU_HARDDISK_32c85e8d-b71e-43db-9ec2-d353b455abf6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32c85e8d-b71e-43db-9ec2-d353b455abf6-part16', 'scsi-SQEMU_QEMU_HARDDISK_32c85e8d-b71e-43db-9ec2-d353b455abf6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-19 10:35:45.787438 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:35:45.787453 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--6ed986be--d550--5e98--86ee--1d899c3b1ca9-osd--block--6ed986be--d550--5e98--86ee--1d899c3b1ca9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5cZ3bI-yphK-DX0j-eq18-5KjP-2qsX-orgfUW', 'scsi-0QEMU_QEMU_HARDDISK_6a40ab2f-d460-475a-85e2-5470cb1f2b74', 'scsi-SQEMU_QEMU_HARDDISK_6a40ab2f-d460-475a-85e2-5470cb1f2b74'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-19 10:35:45.787463 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--79abc216--b4ba--5883--a19f--da26bd64d731-osd--block--79abc216--b4ba--5883--a19f--da26bd64d731'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-G13f9C-LH5R-OpPS-YXca-r93x-U6vC-ldTAYB', 'scsi-0QEMU_QEMU_HARDDISK_38f445f8-bcf4-4b54-8d34-faf3abd36175', 'scsi-SQEMU_QEMU_HARDDISK_38f445f8-bcf4-4b54-8d34-faf3abd36175'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-19 10:35:45.787473 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f17817e-651e-4f9a-8129-c3db8254ad0b', 'scsi-SQEMU_QEMU_HARDDISK_2f17817e-651e-4f9a-8129-c3db8254ad0b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-19 10:35:45.787483 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-19-09-43-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-19 10:35:45.787498 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:35:45.787508 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3c3fffd7--e076--56d5--815a--37625d7b3693-osd--block--3c3fffd7--e076--56d5--815a--37625d7b3693', 'dm-uuid-LVM-iMc2qlTJtEt456uyjM8G66T1ryw1zEFJCntoOPXDuhR1TXYKYg92dRvX78kvQfdl'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-19 10:35:45.787523 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--eebf63d4--54bc--5b4a--b141--3683d252bf06-osd--block--eebf63d4--54bc--5b4a--b141--3683d252bf06', 'dm-uuid-LVM-Hf8bPfljEZOuC036yY59Zp1iEAlpSTQeymxFXnO5uCz0J3Xzp4Fl7CDhANxyEEzq'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-19 10:35:45.787538 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:35:45.787548 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:35:45.787558 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:35:45.787568 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:35:45.787578 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:35:45.787587 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:35:45.787603 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:35:45.787613 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-19 10:35:45.787640 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d3db73c4-91fc-4185-92a8-f3f49747b38e', 'scsi-SQEMU_QEMU_HARDDISK_d3db73c4-91fc-4185-92a8-f3f49747b38e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d3db73c4-91fc-4185-92a8-f3f49747b38e-part1', 'scsi-SQEMU_QEMU_HARDDISK_d3db73c4-91fc-4185-92a8-f3f49747b38e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d3db73c4-91fc-4185-92a8-f3f49747b38e-part14', 'scsi-SQEMU_QEMU_HARDDISK_d3db73c4-91fc-4185-92a8-f3f49747b38e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d3db73c4-91fc-4185-92a8-f3f49747b38e-part15', 'scsi-SQEMU_QEMU_HARDDISK_d3db73c4-91fc-4185-92a8-f3f49747b38e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d3db73c4-91fc-4185-92a8-f3f49747b38e-part16', 'scsi-SQEMU_QEMU_HARDDISK_d3db73c4-91fc-4185-92a8-f3f49747b38e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 
'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-19 10:35:45.787652 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--3c3fffd7--e076--56d5--815a--37625d7b3693-osd--block--3c3fffd7--e076--56d5--815a--37625d7b3693'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-flVRpd-9uE6-rKaD-ogui-ysZt-MUo1-sX3ca6', 'scsi-0QEMU_QEMU_HARDDISK_1ab95973-8f65-40ad-b4e2-5ebf4e7cdc3f', 'scsi-SQEMU_QEMU_HARDDISK_1ab95973-8f65-40ad-b4e2-5ebf4e7cdc3f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-19 10:35:45.787668 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--eebf63d4--54bc--5b4a--b141--3683d252bf06-osd--block--eebf63d4--54bc--5b4a--b141--3683d252bf06'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-MDMvGv-SNh8-xR9R-r3OZ-Mfce-jcBT-mY01Ah', 'scsi-0QEMU_QEMU_HARDDISK_d7da1435-c5c9-4327-bd6f-1fcfb647c27d', 'scsi-SQEMU_QEMU_HARDDISK_d7da1435-c5c9-4327-bd6f-1fcfb647c27d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-19 10:35:45.787678 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d47195-a07b-47d0-b7e6-8f07488663d6', 'scsi-SQEMU_QEMU_HARDDISK_48d47195-a07b-47d0-b7e6-8f07488663d6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-19 10:35:45.787694 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-19-09-43-29-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-19 10:35:45.787703 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:35:45.787713 | orchestrator | 2025-06-19 10:35:45.787722 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2025-06-19 10:35:45.787732 | orchestrator | Thursday 19 June 2025 10:33:54 +0000 (0:00:00.557) 0:00:16.976 ********* 2025-06-19 10:35:45.787747 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3f69fe47--683a--554f--92f7--031e2a26df27-osd--block--3f69fe47--683a--554f--92f7--031e2a26df27', 'dm-uuid-LVM-3FbwjtgmDfoYI2HFMVZ7etdFItcyZ0uA120tcIDhw0ksPX9thSpC6lMqPATNVSsQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:35:45.787758 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--04cfa187--5820--5d05--93de--747bac6f19c1-osd--block--04cfa187--5820--5d05--93de--747bac6f19c1', 'dm-uuid-LVM-MJpIKKReme2cd0ENNcgCir2ui8Foeckc0XwTlPW1Xwlo0ltlGpmVFKsVBWqITycn'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:35:45.787774 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:35:45.787783 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:35:45.787793 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:35:45.787810 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:35:45.787824 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:35:45.787834 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:35:45.787844 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:35:45.787861 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6ed986be--d550--5e98--86ee--1d899c3b1ca9-osd--block--6ed986be--d550--5e98--86ee--1d899c3b1ca9', 'dm-uuid-LVM-6UuOoFyYjJqfS0KyCKdOA2ZjxgfYaumB0JzZSfzm5xIlni9r9FK3ddaezn5i3pKJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:35:45.787871 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:35:45.787887 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--79abc216--b4ba--5883--a19f--da26bd64d731-osd--block--79abc216--b4ba--5883--a19f--da26bd64d731', 'dm-uuid-LVM-5u9SY1ubYuxxI9hO0nIhcA10D1Nk4CPofYdDZ1RLD6MDXShcpLtHCCGGR0BEV1H9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:35:45.787903 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_236643a8-3fbf-4a38-ac5c-7d15a0179c3a', 'scsi-SQEMU_QEMU_HARDDISK_236643a8-3fbf-4a38-ac5c-7d15a0179c3a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_236643a8-3fbf-4a38-ac5c-7d15a0179c3a-part1', 'scsi-SQEMU_QEMU_HARDDISK_236643a8-3fbf-4a38-ac5c-7d15a0179c3a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_236643a8-3fbf-4a38-ac5c-7d15a0179c3a-part14', 'scsi-SQEMU_QEMU_HARDDISK_236643a8-3fbf-4a38-ac5c-7d15a0179c3a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_236643a8-3fbf-4a38-ac5c-7d15a0179c3a-part15', 'scsi-SQEMU_QEMU_HARDDISK_236643a8-3fbf-4a38-ac5c-7d15a0179c3a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_236643a8-3fbf-4a38-ac5c-7d15a0179c3a-part16', 'scsi-SQEMU_QEMU_HARDDISK_236643a8-3fbf-4a38-ac5c-7d15a0179c3a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:35:45.787920 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:35:45.787930 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--3f69fe47--683a--554f--92f7--031e2a26df27-osd--block--3f69fe47--683a--554f--92f7--031e2a26df27'], 'host': 
'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-97R3N7-2A34-s4Zc-sU9t-FfDM-jVwa-FScsR4', 'scsi-0QEMU_QEMU_HARDDISK_5fba7027-7a45-483b-8644-e0c0ef304581', 'scsi-SQEMU_QEMU_HARDDISK_5fba7027-7a45-483b-8644-e0c0ef304581'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:35:45.787952 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:35:45.787963 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--04cfa187--5820--5d05--93de--747bac6f19c1-osd--block--04cfa187--5820--5d05--93de--747bac6f19c1'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-03tnYq-6ggS-I1wM-HNLR-s7cp-1W3b-GMFoGB', 'scsi-0QEMU_QEMU_HARDDISK_5cdb3fff-d4f1-405f-abd7-b446ee32738c', 'scsi-SQEMU_QEMU_HARDDISK_5cdb3fff-d4f1-405f-abd7-b446ee32738c'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:35:45.787979 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:35:45.787989 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6c4f0114-96df-472d-8cd2-75acad9ce658', 'scsi-SQEMU_QEMU_HARDDISK_6c4f0114-96df-472d-8cd2-75acad9ce658'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:35:45.787999 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-19-09-43-36-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:35:45.788015 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:35:45.788025 | orchestrator | skipping: 
[testbed-node-3] 2025-06-19 10:35:45.788039 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:35:45.788049 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:35:45.788065 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:35:45.788075 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:35:45.788114 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32c85e8d-b71e-43db-9ec2-d353b455abf6', 'scsi-SQEMU_QEMU_HARDDISK_32c85e8d-b71e-43db-9ec2-d353b455abf6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32c85e8d-b71e-43db-9ec2-d353b455abf6-part1', 'scsi-SQEMU_QEMU_HARDDISK_32c85e8d-b71e-43db-9ec2-d353b455abf6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32c85e8d-b71e-43db-9ec2-d353b455abf6-part14', 'scsi-SQEMU_QEMU_HARDDISK_32c85e8d-b71e-43db-9ec2-d353b455abf6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_32c85e8d-b71e-43db-9ec2-d353b455abf6-part15', 'scsi-SQEMU_QEMU_HARDDISK_32c85e8d-b71e-43db-9ec2-d353b455abf6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32c85e8d-b71e-43db-9ec2-d353b455abf6-part16', 'scsi-SQEMU_QEMU_HARDDISK_32c85e8d-b71e-43db-9ec2-d353b455abf6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:35:45.788126 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--6ed986be--d550--5e98--86ee--1d899c3b1ca9-osd--block--6ed986be--d550--5e98--86ee--1d899c3b1ca9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5cZ3bI-yphK-DX0j-eq18-5KjP-2qsX-orgfUW', 'scsi-0QEMU_QEMU_HARDDISK_6a40ab2f-d460-475a-85e2-5470cb1f2b74', 'scsi-SQEMU_QEMU_HARDDISK_6a40ab2f-d460-475a-85e2-5470cb1f2b74'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:35:45.788143 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3c3fffd7--e076--56d5--815a--37625d7b3693-osd--block--3c3fffd7--e076--56d5--815a--37625d7b3693', 'dm-uuid-LVM-iMc2qlTJtEt456uyjM8G66T1ryw1zEFJCntoOPXDuhR1TXYKYg92dRvX78kvQfdl'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:35:45.788153 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--79abc216--b4ba--5883--a19f--da26bd64d731-osd--block--79abc216--b4ba--5883--a19f--da26bd64d731'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-G13f9C-LH5R-OpPS-YXca-r93x-U6vC-ldTAYB', 'scsi-0QEMU_QEMU_HARDDISK_38f445f8-bcf4-4b54-8d34-faf3abd36175', 'scsi-SQEMU_QEMU_HARDDISK_38f445f8-bcf4-4b54-8d34-faf3abd36175'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:35:45.788171 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--eebf63d4--54bc--5b4a--b141--3683d252bf06-osd--block--eebf63d4--54bc--5b4a--b141--3683d252bf06', 'dm-uuid-LVM-Hf8bPfljEZOuC036yY59Zp1iEAlpSTQeymxFXnO5uCz0J3Xzp4Fl7CDhANxyEEzq'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:35:45.788185 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2f17817e-651e-4f9a-8129-c3db8254ad0b', 'scsi-SQEMU_QEMU_HARDDISK_2f17817e-651e-4f9a-8129-c3db8254ad0b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:35:45.788204 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:35:45.788214 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-19-09-43-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:35:45.788224 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:35:45.788234 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:35:45.788244 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:35:45.788259 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:35:45.788273 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:35:45.788289 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:35:45.788299 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:35:45.788308 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:35:45.788330 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d3db73c4-91fc-4185-92a8-f3f49747b38e', 'scsi-SQEMU_QEMU_HARDDISK_d3db73c4-91fc-4185-92a8-f3f49747b38e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d3db73c4-91fc-4185-92a8-f3f49747b38e-part1', 'scsi-SQEMU_QEMU_HARDDISK_d3db73c4-91fc-4185-92a8-f3f49747b38e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d3db73c4-91fc-4185-92a8-f3f49747b38e-part14', 'scsi-SQEMU_QEMU_HARDDISK_d3db73c4-91fc-4185-92a8-f3f49747b38e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d3db73c4-91fc-4185-92a8-f3f49747b38e-part15', 'scsi-SQEMU_QEMU_HARDDISK_d3db73c4-91fc-4185-92a8-f3f49747b38e-part15'], 
'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d3db73c4-91fc-4185-92a8-f3f49747b38e-part16', 'scsi-SQEMU_QEMU_HARDDISK_d3db73c4-91fc-4185-92a8-f3f49747b38e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:35:45.788348 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--3c3fffd7--e076--56d5--815a--37625d7b3693-osd--block--3c3fffd7--e076--56d5--815a--37625d7b3693'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-flVRpd-9uE6-rKaD-ogui-ysZt-MUo1-sX3ca6', 'scsi-0QEMU_QEMU_HARDDISK_1ab95973-8f65-40ad-b4e2-5ebf4e7cdc3f', 'scsi-SQEMU_QEMU_HARDDISK_1ab95973-8f65-40ad-b4e2-5ebf4e7cdc3f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:35:45.788358 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--eebf63d4--54bc--5b4a--b141--3683d252bf06-osd--block--eebf63d4--54bc--5b4a--b141--3683d252bf06'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-MDMvGv-SNh8-xR9R-r3OZ-Mfce-jcBT-mY01Ah', 'scsi-0QEMU_QEMU_HARDDISK_d7da1435-c5c9-4327-bd6f-1fcfb647c27d', 'scsi-SQEMU_QEMU_HARDDISK_d7da1435-c5c9-4327-bd6f-1fcfb647c27d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:35:45.788369 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_48d47195-a07b-47d0-b7e6-8f07488663d6', 'scsi-SQEMU_QEMU_HARDDISK_48d47195-a07b-47d0-b7e6-8f07488663d6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:35:45.788384 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-19-09-43-29-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-19 10:35:45.788394 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:35:45.788403 | orchestrator | 2025-06-19 10:35:45.788413 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-06-19 10:35:45.788427 | orchestrator | Thursday 19 June 2025 10:33:55 +0000 (0:00:00.610) 0:00:17.587 ********* 2025-06-19 10:35:45.788444 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:35:45.788453 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:35:45.788463 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:35:45.788472 | orchestrator | 2025-06-19 10:35:45.788482 | orchestrator | TASK [ceph-facts : Set default 
osd_pool_default_crush_rule fact] *************** 2025-06-19 10:35:45.788491 | orchestrator | Thursday 19 June 2025 10:33:55 +0000 (0:00:00.673) 0:00:18.260 ********* 2025-06-19 10:35:45.788501 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:35:45.788510 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:35:45.788519 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:35:45.788529 | orchestrator | 2025-06-19 10:35:45.788538 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-06-19 10:35:45.788548 | orchestrator | Thursday 19 June 2025 10:33:56 +0000 (0:00:00.463) 0:00:18.723 ********* 2025-06-19 10:35:45.788557 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:35:45.788566 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:35:45.788576 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:35:45.788585 | orchestrator | 2025-06-19 10:35:45.788595 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-06-19 10:35:45.788604 | orchestrator | Thursday 19 June 2025 10:33:57 +0000 (0:00:00.655) 0:00:19.379 ********* 2025-06-19 10:35:45.788614 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:35:45.788623 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:35:45.788633 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:35:45.788642 | orchestrator | 2025-06-19 10:35:45.788651 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-06-19 10:35:45.788661 | orchestrator | Thursday 19 June 2025 10:33:57 +0000 (0:00:00.277) 0:00:19.657 ********* 2025-06-19 10:35:45.788670 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:35:45.788680 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:35:45.788689 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:35:45.788699 | orchestrator | 2025-06-19 10:35:45.788708 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] 
*********************** 2025-06-19 10:35:45.788717 | orchestrator | Thursday 19 June 2025 10:33:57 +0000 (0:00:00.398) 0:00:20.056 ********* 2025-06-19 10:35:45.788727 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:35:45.788736 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:35:45.788746 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:35:45.788755 | orchestrator | 2025-06-19 10:35:45.788764 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-06-19 10:35:45.788774 | orchestrator | Thursday 19 June 2025 10:33:58 +0000 (0:00:00.482) 0:00:20.539 ********* 2025-06-19 10:35:45.788784 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-06-19 10:35:45.788793 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-06-19 10:35:45.788802 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-06-19 10:35:45.788812 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-06-19 10:35:45.788821 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-06-19 10:35:45.788830 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-06-19 10:35:45.788840 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-06-19 10:35:45.788849 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-06-19 10:35:45.788858 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-06-19 10:35:45.788868 | orchestrator | 2025-06-19 10:35:45.788877 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-06-19 10:35:45.788887 | orchestrator | Thursday 19 June 2025 10:33:59 +0000 (0:00:00.817) 0:00:21.357 ********* 2025-06-19 10:35:45.788896 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-06-19 10:35:45.788906 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-06-19 10:35:45.788915 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-2)  2025-06-19 10:35:45.788924 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:35:45.788940 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-06-19 10:35:45.788950 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-06-19 10:35:45.788959 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-06-19 10:35:45.788968 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:35:45.788978 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-06-19 10:35:45.788987 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-06-19 10:35:45.788996 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-06-19 10:35:45.789006 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:35:45.789015 | orchestrator | 2025-06-19 10:35:45.789024 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-06-19 10:35:45.789034 | orchestrator | Thursday 19 June 2025 10:33:59 +0000 (0:00:00.338) 0:00:21.695 ********* 2025-06-19 10:35:45.789044 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-19 10:35:45.789053 | orchestrator | 2025-06-19 10:35:45.789063 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-06-19 10:35:45.789073 | orchestrator | Thursday 19 June 2025 10:34:00 +0000 (0:00:00.699) 0:00:22.395 ********* 2025-06-19 10:35:45.789083 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:35:45.789139 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:35:45.789150 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:35:45.789159 | orchestrator | 2025-06-19 10:35:45.789174 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block 
ipv4] **** 2025-06-19 10:35:45.789184 | orchestrator | Thursday 19 June 2025 10:34:00 +0000 (0:00:00.314) 0:00:22.709 ********* 2025-06-19 10:35:45.789194 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:35:45.789203 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:35:45.789212 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:35:45.789222 | orchestrator | 2025-06-19 10:35:45.789231 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-06-19 10:35:45.789241 | orchestrator | Thursday 19 June 2025 10:34:00 +0000 (0:00:00.306) 0:00:23.016 ********* 2025-06-19 10:35:45.789255 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:35:45.789264 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:35:45.789273 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:35:45.789283 | orchestrator | 2025-06-19 10:35:45.789292 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-06-19 10:35:45.789302 | orchestrator | Thursday 19 June 2025 10:34:00 +0000 (0:00:00.301) 0:00:23.318 ********* 2025-06-19 10:35:45.789311 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:35:45.789321 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:35:45.789330 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:35:45.789339 | orchestrator | 2025-06-19 10:35:45.789349 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-06-19 10:35:45.789358 | orchestrator | Thursday 19 June 2025 10:34:01 +0000 (0:00:00.584) 0:00:23.903 ********* 2025-06-19 10:35:45.789368 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-19 10:35:45.789377 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-19 10:35:45.789386 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-19 10:35:45.789396 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:35:45.789405 | 
orchestrator | 2025-06-19 10:35:45.789414 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-06-19 10:35:45.789424 | orchestrator | Thursday 19 June 2025 10:34:01 +0000 (0:00:00.379) 0:00:24.282 ********* 2025-06-19 10:35:45.789433 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-19 10:35:45.789442 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-19 10:35:45.789452 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-19 10:35:45.789461 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:35:45.789477 | orchestrator | 2025-06-19 10:35:45.789486 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-06-19 10:35:45.789495 | orchestrator | Thursday 19 June 2025 10:34:02 +0000 (0:00:00.376) 0:00:24.658 ********* 2025-06-19 10:35:45.789505 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-19 10:35:45.789514 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-19 10:35:45.789523 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-19 10:35:45.789533 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:35:45.789542 | orchestrator | 2025-06-19 10:35:45.789552 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-06-19 10:35:45.789561 | orchestrator | Thursday 19 June 2025 10:34:02 +0000 (0:00:00.356) 0:00:25.015 ********* 2025-06-19 10:35:45.789571 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:35:45.789580 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:35:45.789590 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:35:45.789599 | orchestrator | 2025-06-19 10:35:45.789609 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-06-19 10:35:45.789618 | orchestrator | Thursday 19 June 2025 10:34:02 +0000 
(0:00:00.323) 0:00:25.338 ********* 2025-06-19 10:35:45.789628 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-06-19 10:35:45.789637 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-06-19 10:35:45.789646 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-06-19 10:35:45.789656 | orchestrator | 2025-06-19 10:35:45.789663 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-06-19 10:35:45.789671 | orchestrator | Thursday 19 June 2025 10:34:03 +0000 (0:00:00.483) 0:00:25.823 ********* 2025-06-19 10:35:45.789679 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-19 10:35:45.789686 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-19 10:35:45.789694 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-19 10:35:45.789702 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-06-19 10:35:45.789710 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-06-19 10:35:45.789717 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-06-19 10:35:45.789725 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-06-19 10:35:45.789733 | orchestrator | 2025-06-19 10:35:45.789740 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-06-19 10:35:45.789748 | orchestrator | Thursday 19 June 2025 10:34:04 +0000 (0:00:00.977) 0:00:26.800 ********* 2025-06-19 10:35:45.789756 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-19 10:35:45.789764 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-19 10:35:45.789771 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-19 10:35:45.789779 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-06-19 10:35:45.789787 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-06-19 10:35:45.789794 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-06-19 10:35:45.789802 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-06-19 10:35:45.789810 | orchestrator |
2025-06-19 10:35:45.789822 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2025-06-19 10:35:45.789830 | orchestrator | Thursday 19 June 2025 10:34:06 +0000 (0:00:01.984) 0:00:28.785 *********
2025-06-19 10:35:45.789837 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:35:45.789845 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:35:45.789853 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2025-06-19 10:35:45.789868 | orchestrator |
2025-06-19 10:35:45.789876 | orchestrator | TASK [create openstack pool(s)] ************************************************
2025-06-19 10:35:45.789883 | orchestrator | Thursday 19 June 2025 10:34:06 +0000 (0:00:00.377) 0:00:29.163 *********
2025-06-19 10:35:45.789895 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-06-19 10:35:45.789904 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-06-19 10:35:45.789912 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-06-19 10:35:45.789920 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-06-19 10:35:45.789928 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-06-19 10:35:45.789936 | orchestrator |
2025-06-19 10:35:45.789944 | orchestrator | TASK [generate keys] ***********************************************************
2025-06-19 10:35:45.789952 | orchestrator | Thursday 19 June 2025 10:34:50 +0000 (0:00:43.792) 0:01:12.955 *********
2025-06-19 10:35:45.789960 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-19 10:35:45.789967 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-19 10:35:45.789975 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-19 10:35:45.789983 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-19 10:35:45.789990 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-19 10:35:45.789998 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-19 10:35:45.790006 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]
2025-06-19 10:35:45.790013 | orchestrator |
2025-06-19 10:35:45.790067 | orchestrator | TASK [get keys from monitors] **************************************************
2025-06-19 10:35:45.790075 | orchestrator | Thursday 19 June 2025 10:35:14 +0000 (0:00:23.883) 0:01:36.839 *********
2025-06-19 10:35:45.790082 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-19 10:35:45.790105 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-19 10:35:45.790113 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-19 10:35:45.790121 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-19 10:35:45.790129 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-19 10:35:45.790137 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-19 10:35:45.790144 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2025-06-19 10:35:45.790152 | orchestrator |
2025-06-19 10:35:45.790160 | orchestrator | TASK [copy ceph key(s) if needed] **********************************************
2025-06-19 10:35:45.790174 | orchestrator | Thursday 19 June 2025 10:35:26 +0000 (0:00:11.688) 0:01:48.527 *********
2025-06-19 10:35:45.790182 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-19 10:35:45.790190 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-06-19 10:35:45.790198 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-19 10:35:45.790205 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-19 10:35:45.790213 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-06-19 10:35:45.790221 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-19 10:35:45.790235 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-19 10:35:45.790243 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-06-19 10:35:45.790251 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-19 10:35:45.790258 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-19 10:35:45.790266 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-06-19 10:35:45.790278 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-19 10:35:45.790286 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-19 10:35:45.790294 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-06-19 10:35:45.790302 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-19 10:35:45.790310 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-19 10:35:45.790318 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-06-19 10:35:45.790326 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-19 10:35:45.790334 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2025-06-19 10:35:45.790342 | orchestrator |
2025-06-19 10:35:45.790349 | orchestrator | PLAY RECAP *********************************************************************
2025-06-19 10:35:45.790357 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2025-06-19 10:35:45.790366 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-06-19 10:35:45.790374 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-06-19 10:35:45.790382 | orchestrator |
2025-06-19 10:35:45.790390 | orchestrator |
2025-06-19 10:35:45.790398 | orchestrator |
2025-06-19 10:35:45.790405 | orchestrator | TASKS RECAP ********************************************************************
2025-06-19 10:35:45.790413 | orchestrator | Thursday 19 June 2025 10:35:43 +0000 (0:00:17.105) 0:02:05.632 *********
2025-06-19 10:35:45.790421 | orchestrator | ===============================================================================
2025-06-19 10:35:45.790428 | orchestrator | create openstack pool(s) ----------------------------------------------- 43.79s
2025-06-19 10:35:45.790436 | orchestrator | generate keys ---------------------------------------------------------- 23.88s
2025-06-19 10:35:45.790444 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.11s
2025-06-19 10:35:45.790452 | orchestrator | get keys from monitors ------------------------------------------------- 11.69s
2025-06-19 10:35:45.790459 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 2.75s
2025-06-19 10:35:45.790467 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.11s
2025-06-19 10:35:45.790475 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.98s
2025-06-19 10:35:45.790488 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.98s
2025-06-19 10:35:45.790496 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.82s
2025-06-19 10:35:45.790504 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.80s
2025-06-19 10:35:45.790512 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.77s
2025-06-19 10:35:45.790520 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.70s
2025-06-19 10:35:45.790527 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.67s
2025-06-19 10:35:45.790535 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.66s
2025-06-19 10:35:45.790543 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.61s
2025-06-19 10:35:45.790550 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.60s
2025-06-19 10:35:45.790558 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.60s
2025-06-19 10:35:45.790566 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.58s
2025-06-19 10:35:45.790574 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.56s
2025-06-19 10:35:45.790582 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.56s
2025-06-19 10:35:45.790590 | orchestrator | 2025-06-19 10:35:45 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:35:48.830842 | orchestrator | 2025-06-19 10:35:48 | INFO  | Task fa1324ed-f000-41a3-bfae-80a02b92beff is in state STARTED
2025-06-19 10:35:48.832704 | orchestrator | 2025-06-19 10:35:48 | INFO  | Task 7d06717f-1d43-4a7d-bee5-54b2c742051a is in state STARTED
2025-06-19 10:35:48.834305 | orchestrator | 2025-06-19 10:35:48 | INFO  | Task 795e2426-e2a6-4c14-9c8b-39f11ead4041 is in state STARTED
2025-06-19 10:35:48.834337 | orchestrator | 2025-06-19 10:35:48 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:35:51.877757 | orchestrator | 2025-06-19 10:35:51 | INFO  | Task fa1324ed-f000-41a3-bfae-80a02b92beff is in state
STARTED
2025-06-19 10:36:13.234330 | orchestrator | 2025-06-19 10:36:13 | INFO  | Task fa1324ed-f000-41a3-bfae-80a02b92beff is in state SUCCESS
2025-06-19 10:36:13.238168 | orchestrator | 2025-06-19 10:36:13 | INFO  | Task c654c74a-c259-41d6-8c4b-265db07b108f is in state STARTED
2025-06-19 10:36:13.240227 | orchestrator | 2025-06-19 10:36:13 | INFO  | Task 7d06717f-1d43-4a7d-bee5-54b2c742051a is in state STARTED
2025-06-19 10:36:13.242354 | orchestrator | 2025-06-19 10:36:13 | INFO  | Task 795e2426-e2a6-4c14-9c8b-39f11ead4041 is in state STARTED
2025-06-19 10:36:13.242381 | orchestrator | 2025-06-19 10:36:13 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:36:58.947880 | orchestrator | 2025-06-19 10:36:58 | INFO  | Task c654c74a-c259-41d6-8c4b-265db07b108f is in state STARTED
2025-06-19 10:36:58.948047 | orchestrator | 2025-06-19 10:36:58 | INFO  | Task 7d06717f-1d43-4a7d-bee5-54b2c742051a is in state STARTED
2025-06-19 10:36:58.949344 | orchestrator | 2025-06-19 10:36:58 | INFO  | Task 795e2426-e2a6-4c14-9c8b-39f11ead4041 is in state STARTED
2025-06-19 10:36:58.949374 | orchestrator | 2025-06-19 10:36:58 | INFO  | Wait 1 second(s) until the next
check 2025-06-19 10:37:02.002325 | orchestrator | 2025-06-19 10:37:01 | INFO  | Task c654c74a-c259-41d6-8c4b-265db07b108f is in state STARTED 2025-06-19 10:37:02.003452 | orchestrator | 2025-06-19 10:37:02 | INFO  | Task 7d06717f-1d43-4a7d-bee5-54b2c742051a is in state STARTED 2025-06-19 10:37:02.004589 | orchestrator | 2025-06-19 10:37:02 | INFO  | Task 795e2426-e2a6-4c14-9c8b-39f11ead4041 is in state STARTED 2025-06-19 10:37:02.004628 | orchestrator | 2025-06-19 10:37:02 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:37:05.039855 | orchestrator | 2025-06-19 10:37:05 | INFO  | Task c654c74a-c259-41d6-8c4b-265db07b108f is in state STARTED 2025-06-19 10:37:05.040559 | orchestrator | 2025-06-19 10:37:05 | INFO  | Task 7d06717f-1d43-4a7d-bee5-54b2c742051a is in state STARTED 2025-06-19 10:37:05.041184 | orchestrator | 2025-06-19 10:37:05 | INFO  | Task 795e2426-e2a6-4c14-9c8b-39f11ead4041 is in state STARTED 2025-06-19 10:37:05.041389 | orchestrator | 2025-06-19 10:37:05 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:37:08.096909 | orchestrator | 2025-06-19 10:37:08.097056 | orchestrator | 2025-06-19 10:37:08.097073 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-06-19 10:37:08.097085 | orchestrator | 2025-06-19 10:37:08.097097 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2025-06-19 10:37:08.097109 | orchestrator | Thursday 19 June 2025 10:35:47 +0000 (0:00:00.154) 0:00:00.154 ********* 2025-06-19 10:37:08.097120 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-06-19 10:37:08.097132 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-06-19 10:37:08.097143 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-06-19 10:37:08.097154 | orchestrator | ok: 
[testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-06-19 10:37:08.097165 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-06-19 10:37:08.097176 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-06-19 10:37:08.097186 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-06-19 10:37:08.097197 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-06-19 10:37:08.097208 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-06-19 10:37:08.097219 | orchestrator | 2025-06-19 10:37:08.097230 | orchestrator | TASK [Create share directory] ************************************************** 2025-06-19 10:37:08.097257 | orchestrator | Thursday 19 June 2025 10:35:51 +0000 (0:00:04.171) 0:00:04.325 ********* 2025-06-19 10:37:08.097269 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-19 10:37:08.097281 | orchestrator | 2025-06-19 10:37:08.097292 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-06-19 10:37:08.097302 | orchestrator | Thursday 19 June 2025 10:35:52 +0000 (0:00:00.981) 0:00:05.307 ********* 2025-06-19 10:37:08.097313 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-06-19 10:37:08.097325 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-06-19 10:37:08.097336 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-06-19 10:37:08.097615 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-06-19 10:37:08.097635 | orchestrator | ok: [testbed-manager -> localhost] => 
(item=ceph.client.cinder.keyring) 2025-06-19 10:37:08.097649 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-06-19 10:37:08.097668 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-06-19 10:37:08.097687 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-06-19 10:37:08.097705 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-06-19 10:37:08.097725 | orchestrator | 2025-06-19 10:37:08.097746 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2025-06-19 10:37:08.097761 | orchestrator | Thursday 19 June 2025 10:36:05 +0000 (0:00:12.957) 0:00:18.264 ********* 2025-06-19 10:37:08.097776 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2025-06-19 10:37:08.097789 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-06-19 10:37:08.097802 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-06-19 10:37:08.097813 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2025-06-19 10:37:08.097846 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-06-19 10:37:08.097858 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2025-06-19 10:37:08.097869 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2025-06-19 10:37:08.097879 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2025-06-19 10:37:08.097890 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2025-06-19 10:37:08.097901 | orchestrator | 2025-06-19 10:37:08.097911 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-19 10:37:08.097971 | orchestrator | 
testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-19 10:37:08.097984 | orchestrator | 2025-06-19 10:37:08.097995 | orchestrator | 2025-06-19 10:37:08.098006 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-19 10:37:08.098074 | orchestrator | Thursday 19 June 2025 10:36:11 +0000 (0:00:05.835) 0:00:24.100 ********* 2025-06-19 10:37:08.098087 | orchestrator | =============================================================================== 2025-06-19 10:37:08.098098 | orchestrator | Write ceph keys to the share directory --------------------------------- 12.96s 2025-06-19 10:37:08.098109 | orchestrator | Write ceph keys to the configuration directory -------------------------- 5.84s 2025-06-19 10:37:08.098120 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.17s 2025-06-19 10:37:08.098130 | orchestrator | Create share directory -------------------------------------------------- 0.98s 2025-06-19 10:37:08.098141 | orchestrator | 2025-06-19 10:37:08.098152 | orchestrator | 2025-06-19 10:37:08.098562 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-06-19 10:37:08.098576 | orchestrator | 2025-06-19 10:37:08.098604 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-06-19 10:37:08.098616 | orchestrator | Thursday 19 June 2025 10:36:17 +0000 (0:00:00.230) 0:00:00.231 ********* 2025-06-19 10:37:08.098627 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-06-19 10:37:08.098638 | orchestrator | 2025-06-19 10:37:08.098649 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-06-19 10:37:08.098660 | orchestrator | Thursday 19 June 2025 10:36:17 +0000 (0:00:00.240) 0:00:00.471 ********* 
2025-06-19 10:37:08.098671 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-06-19 10:37:08.098682 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-06-19 10:37:08.098693 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-06-19 10:37:08.098706 | orchestrator | 2025-06-19 10:37:08.098724 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-06-19 10:37:08.098743 | orchestrator | Thursday 19 June 2025 10:36:18 +0000 (0:00:01.232) 0:00:01.703 ********* 2025-06-19 10:37:08.098762 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-06-19 10:37:08.098782 | orchestrator | 2025-06-19 10:37:08.098801 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-06-19 10:37:08.098820 | orchestrator | Thursday 19 June 2025 10:36:19 +0000 (0:00:01.153) 0:00:02.857 ********* 2025-06-19 10:37:08.098839 | orchestrator | changed: [testbed-manager] 2025-06-19 10:37:08.098851 | orchestrator | 2025-06-19 10:37:08.098862 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-06-19 10:37:08.098873 | orchestrator | Thursday 19 June 2025 10:36:20 +0000 (0:00:00.979) 0:00:03.837 ********* 2025-06-19 10:37:08.098893 | orchestrator | changed: [testbed-manager] 2025-06-19 10:37:08.098904 | orchestrator | 2025-06-19 10:37:08.098941 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-06-19 10:37:08.098954 | orchestrator | Thursday 19 June 2025 10:36:21 +0000 (0:00:00.872) 0:00:04.709 ********* 2025-06-19 10:37:08.098977 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
2025-06-19 10:37:08.098988 | orchestrator | ok: [testbed-manager] 2025-06-19 10:37:08.098999 | orchestrator | 2025-06-19 10:37:08.099010 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-06-19 10:37:08.099021 | orchestrator | Thursday 19 June 2025 10:36:57 +0000 (0:00:35.780) 0:00:40.490 ********* 2025-06-19 10:37:08.099032 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-06-19 10:37:08.099043 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-06-19 10:37:08.099054 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-06-19 10:37:08.099188 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-06-19 10:37:08.099200 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-06-19 10:37:08.099211 | orchestrator | 2025-06-19 10:37:08.099222 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-06-19 10:37:08.099233 | orchestrator | Thursday 19 June 2025 10:37:01 +0000 (0:00:03.725) 0:00:44.215 ********* 2025-06-19 10:37:08.099244 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-06-19 10:37:08.099254 | orchestrator | 2025-06-19 10:37:08.099265 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-06-19 10:37:08.099276 | orchestrator | Thursday 19 June 2025 10:37:01 +0000 (0:00:00.464) 0:00:44.679 ********* 2025-06-19 10:37:08.099287 | orchestrator | skipping: [testbed-manager] 2025-06-19 10:37:08.099297 | orchestrator | 2025-06-19 10:37:08.099308 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-06-19 10:37:08.099319 | orchestrator | Thursday 19 June 2025 10:37:01 +0000 (0:00:00.129) 0:00:44.808 ********* 2025-06-19 10:37:08.099330 | orchestrator | skipping: [testbed-manager] 2025-06-19 10:37:08.099340 | orchestrator | 2025-06-19 10:37:08.099351 | orchestrator | RUNNING HANDLER 
[osism.services.cephclient : Restart cephclient service] ******* 2025-06-19 10:37:08.099362 | orchestrator | Thursday 19 June 2025 10:37:02 +0000 (0:00:01.629) 0:00:45.102 ********* 2025-06-19 10:37:08.099373 | orchestrator | changed: [testbed-manager] 2025-06-19 10:37:08.099383 | orchestrator | 2025-06-19 10:37:08.099394 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-06-19 10:37:08.099405 | orchestrator | Thursday 19 June 2025 10:37:03 +0000 (0:00:01.629) 0:00:46.731 ********* 2025-06-19 10:37:08.099416 | orchestrator | changed: [testbed-manager] 2025-06-19 10:37:08.099427 | orchestrator | 2025-06-19 10:37:08.099437 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for a healthy service] ****** 2025-06-19 10:37:08.099448 | orchestrator | Thursday 19 June 2025 10:37:04 +0000 (0:00:00.694) 0:00:47.426 ********* 2025-06-19 10:37:08.099459 | orchestrator | changed: [testbed-manager] 2025-06-19 10:37:08.099470 | orchestrator | 2025-06-19 10:37:08.099480 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-06-19 10:37:08.099491 | orchestrator | Thursday 19 June 2025 10:37:05 +0000 (0:00:00.601) 0:00:48.027 ********* 2025-06-19 10:37:08.099502 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-06-19 10:37:08.099513 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-06-19 10:37:08.099523 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-06-19 10:37:08.099534 | orchestrator | ok: [testbed-manager] => (item=rbd) 2025-06-19 10:37:08.099545 | orchestrator | 2025-06-19 10:37:08.099555 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-19 10:37:08.099567 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-19 10:37:08.099578 | orchestrator | 2025-06-19 10:37:08.099589 | orchestrator | 2025-06-19 
10:37:08.099599 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-19 10:37:08.099610 | orchestrator | Thursday 19 June 2025 10:37:06 +0000 (0:00:01.380) 0:00:49.408 ********* 2025-06-19 10:37:08.099633 | orchestrator | =============================================================================== 2025-06-19 10:37:08.099653 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 35.78s 2025-06-19 10:37:08.099664 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.73s 2025-06-19 10:37:08.099675 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.63s 2025-06-19 10:37:08.099686 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.38s 2025-06-19 10:37:08.099696 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.23s 2025-06-19 10:37:08.099707 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.15s 2025-06-19 10:37:08.099718 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.98s 2025-06-19 10:37:08.099728 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.87s 2025-06-19 10:37:08.099739 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.69s 2025-06-19 10:37:08.099750 | orchestrator | osism.services.cephclient : Wait for a healthy service ------------------ 0.60s 2025-06-19 10:37:08.099767 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.46s 2025-06-19 10:37:08.099788 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.29s 2025-06-19 10:37:08.099810 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.24s 2025-06-19 10:37:08.099833 | 
orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s 2025-06-19 10:37:08.099846 | orchestrator | 2025-06-19 10:37:08.099859 | orchestrator | 2025-06-19 10:37:08 | INFO  | Task c654c74a-c259-41d6-8c4b-265db07b108f is in state SUCCESS 2025-06-19 10:37:08.099879 | orchestrator | 2025-06-19 10:37:08 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED 2025-06-19 10:37:08.099892 | orchestrator | 2025-06-19 10:37:08 | INFO  | Task 7d06717f-1d43-4a7d-bee5-54b2c742051a is in state STARTED 2025-06-19 10:37:08.099906 | orchestrator | 2025-06-19 10:37:08 | INFO  | Task 795e2426-e2a6-4c14-9c8b-39f11ead4041 is in state SUCCESS 2025-06-19 10:37:08.099944 | orchestrator | 2025-06-19 10:37:08.099958 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-19 10:37:08.099971 | orchestrator | 2025-06-19 10:37:08.099982 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-19 10:37:08.099993 | orchestrator | Thursday 19 June 2025 10:35:20 +0000 (0:00:00.263) 0:00:00.263 ********* 2025-06-19 10:37:08.100004 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:37:08.100015 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:37:08.100026 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:37:08.100037 | orchestrator | 2025-06-19 10:37:08.100048 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-19 10:37:08.100059 | orchestrator | Thursday 19 June 2025 10:35:20 +0000 (0:00:00.292) 0:00:00.556 ********* 2025-06-19 10:37:08.100071 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-06-19 10:37:08.100088 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-06-19 10:37:08.100107 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-06-19 10:37:08.100124 | orchestrator | 2025-06-19 10:37:08.100142 | orchestrator | PLAY 
[Apply role horizon] ****************************************************** 2025-06-19 10:37:08.100159 | orchestrator | 2025-06-19 10:37:08.100170 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-19 10:37:08.100181 | orchestrator | Thursday 19 June 2025 10:35:21 +0000 (0:00:00.413) 0:00:00.969 ********* 2025-06-19 10:37:08.100192 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:37:08.100204 | orchestrator | 2025-06-19 10:37:08.100215 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-06-19 10:37:08.100226 | orchestrator | Thursday 19 June 2025 10:35:21 +0000 (0:00:00.515) 0:00:01.485 ********* 2025-06-19 10:37:08.100256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-19 10:37:08.100289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 
'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-19 10:37:08.100319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-19 10:37:08.100331 | orchestrator | 2025-06-19 10:37:08.100343 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-06-19 10:37:08.100354 | orchestrator | Thursday 19 June 2025 10:35:22 +0000 (0:00:01.098) 0:00:02.584 ********* 2025-06-19 10:37:08.100370 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:37:08.100381 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:37:08.100392 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:37:08.100402 | orchestrator | 2025-06-19 10:37:08.100413 | orchestrator | 
TASK [horizon : include_tasks] ************************************************* 2025-06-19 10:37:08.100424 | orchestrator | Thursday 19 June 2025 10:35:23 +0000 (0:00:00.420) 0:00:03.004 ********* 2025-06-19 10:37:08.100435 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-06-19 10:37:08.100446 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-06-19 10:37:08.100457 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-06-19 10:37:08.100467 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-06-19 10:37:08.100478 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-06-19 10:37:08.100489 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-06-19 10:37:08.100499 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-06-19 10:37:08.100510 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-06-19 10:37:08.100521 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-06-19 10:37:08.100538 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-06-19 10:37:08.100549 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-06-19 10:37:08.100560 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-06-19 10:37:08.100570 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-06-19 10:37:08.100581 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-06-19 10:37:08.100591 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  
2025-06-19 10:37:08.100602 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-06-19 10:37:08.100613 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-06-19 10:37:08.100623 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-06-19 10:37:08.100634 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-06-19 10:37:08.100644 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-06-19 10:37:08.100655 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-06-19 10:37:08.100665 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-06-19 10:37:08.100676 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-06-19 10:37:08.100686 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-06-19 10:37:08.100698 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-06-19 10:37:08.100709 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-06-19 10:37:08.100726 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-06-19 10:37:08.100737 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-06-19 10:37:08.100748 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-06-19 10:37:08.100758 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-06-19 10:37:08.100769 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-06-19 10:37:08.100779 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-06-19 10:37:08.100790 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-06-19 10:37:08.100807 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-06-19 10:37:08.100826 | orchestrator | 2025-06-19 10:37:08.100845 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-19 10:37:08.100871 | orchestrator | Thursday 19 June 2025 10:35:24 +0000 (0:00:00.738) 0:00:03.742 ********* 2025-06-19 10:37:08.100891 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:37:08.100910 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:37:08.100951 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:37:08.100970 | orchestrator | 2025-06-19 10:37:08.100981 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-19 10:37:08.100992 | orchestrator | Thursday 19 June 2025 10:35:24 +0000 (0:00:00.303) 0:00:04.046 ********* 2025-06-19 10:37:08.101003 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:37:08.101014 | orchestrator | 2025-06-19 10:37:08.101025 | orchestrator | TASK [horizon 
: Update custom policy file name] ******************************** 2025-06-19 10:37:08.101036 | orchestrator | Thursday 19 June 2025 10:35:24 +0000 (0:00:00.140) 0:00:04.187 ********* 2025-06-19 10:37:08.101047 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:37:08.101057 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:37:08.101068 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:37:08.101079 | orchestrator | 2025-06-19 10:37:08.101089 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-19 10:37:08.101100 | orchestrator | Thursday 19 June 2025 10:35:24 +0000 (0:00:00.452) 0:00:04.640 ********* 2025-06-19 10:37:08.101111 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:37:08.101122 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:37:08.101133 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:37:08.101143 | orchestrator | 2025-06-19 10:37:08.101154 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-19 10:37:08.101165 | orchestrator | Thursday 19 June 2025 10:35:25 +0000 (0:00:00.320) 0:00:04.960 ********* 2025-06-19 10:37:08.101176 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:37:08.101186 | orchestrator | 2025-06-19 10:37:08.101197 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-19 10:37:08.101208 | orchestrator | Thursday 19 June 2025 10:35:25 +0000 (0:00:00.127) 0:00:05.088 ********* 2025-06-19 10:37:08.101218 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:37:08.101229 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:37:08.101240 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:37:08.101251 | orchestrator | 2025-06-19 10:37:08.101262 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-19 10:37:08.101272 | orchestrator | Thursday 19 June 2025 10:35:25 +0000 (0:00:00.315) 
0:00:05.404 ********* 2025-06-19 10:37:08.101283 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:37:08.101294 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:37:08.101304 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:37:08.101315 | orchestrator | 2025-06-19 10:37:08.101326 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-19 10:37:08.101336 | orchestrator | Thursday 19 June 2025 10:35:26 +0000 (0:00:00.290) 0:00:05.695 ********* 2025-06-19 10:37:08.101347 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:37:08.101358 | orchestrator | 2025-06-19 10:37:08.101369 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-19 10:37:08.101380 | orchestrator | Thursday 19 June 2025 10:35:26 +0000 (0:00:00.129) 0:00:05.824 ********* 2025-06-19 10:37:08.101390 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:37:08.101401 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:37:08.101411 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:37:08.101422 | orchestrator | 2025-06-19 10:37:08.101433 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-19 10:37:08.101444 | orchestrator | Thursday 19 June 2025 10:35:26 +0000 (0:00:00.498) 0:00:06.322 ********* 2025-06-19 10:37:08.101455 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:37:08.101465 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:37:08.101476 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:37:08.101487 | orchestrator | 2025-06-19 10:37:08.101498 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-19 10:37:08.101508 | orchestrator | Thursday 19 June 2025 10:35:26 +0000 (0:00:00.326) 0:00:06.649 ********* 2025-06-19 10:37:08.101519 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:37:08.101530 | orchestrator | 2025-06-19 10:37:08.101540 | orchestrator | 
TASK [horizon : Update custom policy file name] ******************************** 2025-06-19 10:37:08.101551 | orchestrator | Thursday 19 June 2025 10:35:27 +0000 (0:00:00.134) 0:00:06.783 ********* 2025-06-19 10:37:08.101569 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:37:08.101580 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:37:08.101597 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:37:08.101608 | orchestrator | 2025-06-19 10:37:08.101619 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-19 10:37:08.101630 | orchestrator | Thursday 19 June 2025 10:35:27 +0000 (0:00:00.281) 0:00:07.065 ********* 2025-06-19 10:37:08.101641 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:37:08.101652 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:37:08.101662 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:37:08.101673 | orchestrator | 2025-06-19 10:37:08.101684 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-19 10:37:08.101694 | orchestrator | Thursday 19 June 2025 10:35:27 +0000 (0:00:00.475) 0:00:07.541 ********* 2025-06-19 10:37:08.101705 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:37:08.101716 | orchestrator | 2025-06-19 10:37:08.101726 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-19 10:37:08.101737 | orchestrator | Thursday 19 June 2025 10:35:27 +0000 (0:00:00.140) 0:00:07.681 ********* 2025-06-19 10:37:08.101748 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:37:08.101758 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:37:08.101769 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:37:08.101780 | orchestrator | 2025-06-19 10:37:08.101790 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-19 10:37:08.101801 | orchestrator | Thursday 19 June 2025 10:35:28 +0000 
(0:00:00.279) 0:00:07.960 ********* 2025-06-19 10:37:08.101812 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:37:08.101823 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:37:08.101833 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:37:08.101845 | orchestrator | 2025-06-19 10:37:08.101864 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-19 10:37:08.101883 | orchestrator | Thursday 19 June 2025 10:35:28 +0000 (0:00:00.309) 0:00:08.270 ********* 2025-06-19 10:37:08.101904 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:37:08.101954 | orchestrator | 2025-06-19 10:37:08.101969 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-19 10:37:08.101991 | orchestrator | Thursday 19 June 2025 10:35:28 +0000 (0:00:00.144) 0:00:08.414 ********* 2025-06-19 10:37:08.102003 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:37:08.102013 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:37:08.102059 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:37:08.102070 | orchestrator | 2025-06-19 10:37:08.102080 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-19 10:37:08.102091 | orchestrator | Thursday 19 June 2025 10:35:28 +0000 (0:00:00.270) 0:00:08.685 ********* 2025-06-19 10:37:08.102102 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:37:08.102113 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:37:08.102123 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:37:08.102134 | orchestrator | 2025-06-19 10:37:08.102144 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-19 10:37:08.102155 | orchestrator | Thursday 19 June 2025 10:35:29 +0000 (0:00:00.468) 0:00:09.153 ********* 2025-06-19 10:37:08.102166 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:37:08.102176 | orchestrator | 2025-06-19 10:37:08.102187 | 
orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-19 10:37:08.102198 | orchestrator | Thursday 19 June 2025 10:35:29 +0000 (0:00:00.128) 0:00:09.282 ********* 2025-06-19 10:37:08.102209 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:37:08.102220 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:37:08.102230 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:37:08.102241 | orchestrator | 2025-06-19 10:37:08.102252 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-19 10:37:08.102263 | orchestrator | Thursday 19 June 2025 10:35:29 +0000 (0:00:00.299) 0:00:09.581 ********* 2025-06-19 10:37:08.102281 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:37:08.102292 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:37:08.102303 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:37:08.102314 | orchestrator | 2025-06-19 10:37:08.102325 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-19 10:37:08.102335 | orchestrator | Thursday 19 June 2025 10:35:30 +0000 (0:00:00.296) 0:00:09.878 ********* 2025-06-19 10:37:08.102346 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:37:08.102357 | orchestrator | 2025-06-19 10:37:08.102368 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-19 10:37:08.102379 | orchestrator | Thursday 19 June 2025 10:35:30 +0000 (0:00:00.132) 0:00:10.011 ********* 2025-06-19 10:37:08.102389 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:37:08.102400 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:37:08.102411 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:37:08.102421 | orchestrator | 2025-06-19 10:37:08.102432 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-19 10:37:08.102443 | orchestrator | Thursday 19 June 2025 
10:35:30 +0000 (0:00:00.539) 0:00:10.550 *********
2025-06-19 10:37:08.102454 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:37:08.102464 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:37:08.102475 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:37:08.102485 | orchestrator |
2025-06-19 10:37:08.102496 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-19 10:37:08.102507 | orchestrator | Thursday 19 June 2025 10:35:31 +0000 (0:00:00.320) 0:00:10.871 *********
2025-06-19 10:37:08.102518 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:37:08.102528 | orchestrator |
2025-06-19 10:37:08.102539 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-19 10:37:08.102550 | orchestrator | Thursday 19 June 2025 10:35:31 +0000 (0:00:00.125) 0:00:10.997 *********
2025-06-19 10:37:08.102561 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:37:08.102571 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:37:08.102582 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:37:08.102593 | orchestrator |
2025-06-19 10:37:08.102604 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-19 10:37:08.102614 | orchestrator | Thursday 19 June 2025 10:35:31 +0000 (0:00:00.328) 0:00:11.325 *********
2025-06-19 10:37:08.102625 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:37:08.102636 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:37:08.102647 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:37:08.102657 | orchestrator |
2025-06-19 10:37:08.102668 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-19 10:37:08.102687 | orchestrator | Thursday 19 June 2025 10:35:31 +0000 (0:00:00.300) 0:00:11.626 *********
2025-06-19 10:37:08.102698 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:37:08.102709 | orchestrator |
2025-06-19 10:37:08.102720 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-19 10:37:08.102731 | orchestrator | Thursday 19 June 2025 10:35:32 +0000 (0:00:00.136) 0:00:11.763 *********
2025-06-19 10:37:08.102741 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:37:08.102752 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:37:08.102763 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:37:08.102773 | orchestrator |
2025-06-19 10:37:08.102784 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2025-06-19 10:37:08.102795 | orchestrator | Thursday 19 June 2025 10:35:32 +0000 (0:00:00.509) 0:00:12.273 *********
2025-06-19 10:37:08.102806 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:37:08.102816 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:37:08.102827 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:37:08.102838 | orchestrator |
2025-06-19 10:37:08.102848 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2025-06-19 10:37:08.102859 | orchestrator | Thursday 19 June 2025 10:35:34 +0000 (0:00:01.723) 0:00:13.997 *********
2025-06-19 10:37:08.102877 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-06-19 10:37:08.102896 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-06-19 10:37:08.102978 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-06-19 10:37:08.103002 | orchestrator |
2025-06-19 10:37:08.103020 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2025-06-19 10:37:08.103039 | orchestrator | Thursday 19 June 2025 10:35:35 +0000 (0:00:01.661) 0:00:15.658 *********
2025-06-19 10:37:08.103057 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-06-19 10:37:08.103068 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-06-19 10:37:08.103079 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-06-19 10:37:08.103089 | orchestrator |
2025-06-19 10:37:08.103100 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2025-06-19 10:37:08.103110 | orchestrator | Thursday 19 June 2025 10:35:38 +0000 (0:00:02.292) 0:00:17.950 *********
2025-06-19 10:37:08.103121 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-06-19 10:37:08.103132 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-06-19 10:37:08.103143 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-06-19 10:37:08.103154 | orchestrator |
2025-06-19 10:37:08.103164 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2025-06-19 10:37:08.103175 | orchestrator | Thursday 19 June 2025 10:35:40 +0000 (0:00:02.014) 0:00:19.965 *********
2025-06-19 10:37:08.103186 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:37:08.103196 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:37:08.103207 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:37:08.103218 | orchestrator |
2025-06-19 10:37:08.103228 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2025-06-19 10:37:08.103239 | orchestrator | Thursday 19 June 2025 10:35:40 +0000 (0:00:00.389) 0:00:20.354 *********
2025-06-19 10:37:08.103250 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:37:08.103260 | orchestrator | skipping: [testbed-node-1]
2025-06-19
10:37:08.103271 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:37:08.103281 | orchestrator | 2025-06-19 10:37:08.103292 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-19 10:37:08.103302 | orchestrator | Thursday 19 June 2025 10:35:40 +0000 (0:00:00.290) 0:00:20.645 ********* 2025-06-19 10:37:08.103311 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:37:08.103321 | orchestrator | 2025-06-19 10:37:08.103331 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-06-19 10:37:08.103340 | orchestrator | Thursday 19 June 2025 10:35:41 +0000 (0:00:00.778) 0:00:21.423 ********* 2025-06-19 10:37:08.103361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-19 10:37:08.103386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-19 10:37:08.103406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-19 10:37:08.103423 | orchestrator | 2025-06-19 10:37:08.103437 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-06-19 10:37:08.103447 | orchestrator | Thursday 19 June 2025 10:35:43 +0000 (0:00:01.804) 0:00:23.228 ********* 2025-06-19 10:37:08.103458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 
'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-19 10:37:08.103469 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:37:08.103499 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-19 10:37:08.103511 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:37:08.103521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-19 10:37:08.103538 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:37:08.103547 | orchestrator | 2025-06-19 10:37:08.103557 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-06-19 10:37:08.103571 | orchestrator | Thursday 19 June 2025 10:35:44 +0000 (0:00:00.760) 0:00:23.988 ********* 2025-06-19 10:37:08.103587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-19 10:37:08.103598 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:37:08.103615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-19 10:37:08.103631 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:37:08.103647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-19 10:37:08.103658 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:37:08.103668 | orchestrator | 2025-06-19 10:37:08.103677 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-06-19 10:37:08.103687 | orchestrator | Thursday 19 June 2025 10:35:45 +0000 (0:00:01.049) 0:00:25.038 ********* 2025-06-19 10:37:08.103704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': 
{'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-19 10:37:08.103730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-19 10:37:08.103749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-19 10:37:08.103769 | orchestrator | 2025-06-19 10:37:08.103779 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-19 10:37:08.103788 | orchestrator | Thursday 19 June 2025 10:35:46 +0000 (0:00:01.292) 0:00:26.330 ********* 2025-06-19 10:37:08.103798 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:37:08.103807 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:37:08.103817 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:37:08.103826 | orchestrator | 2025-06-19 10:37:08.103840 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-19 10:37:08.103850 | orchestrator | Thursday 19 June 2025 10:35:46 +0000 (0:00:00.294) 0:00:26.624 ********* 2025-06-19 10:37:08.103859 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:37:08.103869 | orchestrator | 2025-06-19 10:37:08.103878 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-06-19 10:37:08.103888 | orchestrator | Thursday 19 June 2025 10:35:47 +0000 (0:00:00.670) 0:00:27.295 ********* 2025-06-19 10:37:08.103898 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:37:08.103907 | orchestrator | 2025-06-19 10:37:08.103942 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-06-19 10:37:08.103959 | orchestrator | Thursday 19 June 2025 
10:35:49 +0000 (0:00:02.095) 0:00:29.390 ********* 2025-06-19 10:37:08.103977 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:37:08.103995 | orchestrator | 2025-06-19 10:37:08.104005 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-06-19 10:37:08.104014 | orchestrator | Thursday 19 June 2025 10:35:51 +0000 (0:00:02.111) 0:00:31.502 ********* 2025-06-19 10:37:08.104024 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:37:08.104034 | orchestrator | 2025-06-19 10:37:08.104043 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-06-19 10:37:08.104053 | orchestrator | Thursday 19 June 2025 10:36:07 +0000 (0:00:15.269) 0:00:46.771 ********* 2025-06-19 10:37:08.104062 | orchestrator | 2025-06-19 10:37:08.104072 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-06-19 10:37:08.104088 | orchestrator | Thursday 19 June 2025 10:36:07 +0000 (0:00:00.069) 0:00:46.841 ********* 2025-06-19 10:37:08.104098 | orchestrator | 2025-06-19 10:37:08.104108 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-06-19 10:37:08.104117 | orchestrator | Thursday 19 June 2025 10:36:07 +0000 (0:00:00.065) 0:00:46.907 ********* 2025-06-19 10:37:08.104127 | orchestrator | 2025-06-19 10:37:08.104136 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-06-19 10:37:08.104146 | orchestrator | Thursday 19 June 2025 10:36:07 +0000 (0:00:00.066) 0:00:46.974 ********* 2025-06-19 10:37:08.104155 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:37:08.104165 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:37:08.104175 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:37:08.104184 | orchestrator | 2025-06-19 10:37:08.104194 | orchestrator | PLAY RECAP ********************************************************************* 
2025-06-19 10:37:08.104203 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2025-06-19 10:37:08.104214 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-06-19 10:37:08.104223 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-06-19 10:37:08.104233 | orchestrator |
2025-06-19 10:37:08.104243 | orchestrator |
2025-06-19 10:37:08.104252 | orchestrator | TASKS RECAP ********************************************************************
2025-06-19 10:37:08.104262 | orchestrator | Thursday 19 June 2025 10:37:06 +0000 (0:00:59.509) 0:01:46.484 *********
2025-06-19 10:37:08.104271 | orchestrator | ===============================================================================
2025-06-19 10:37:08.104281 | orchestrator | horizon : Restart horizon container ------------------------------------ 59.51s
2025-06-19 10:37:08.104290 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 15.27s
2025-06-19 10:37:08.104300 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.29s
2025-06-19 10:37:08.104309 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.11s
2025-06-19 10:37:08.104325 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.10s
2025-06-19 10:37:08.104335 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.01s
2025-06-19 10:37:08.104344 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.80s
2025-06-19 10:37:08.104354 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.72s
2025-06-19 10:37:08.104363 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.66s
2025-06-19 10:37:08.104373 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.29s
2025-06-19 10:37:08.104382 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.10s
2025-06-19 10:37:08.104392 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.05s
2025-06-19 10:37:08.104401 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.78s
2025-06-19 10:37:08.104411 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.76s
2025-06-19 10:37:08.104421 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.74s
2025-06-19 10:37:08.104430 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.67s
2025-06-19 10:37:08.104439 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.54s
2025-06-19 10:37:08.104449 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.52s
2025-06-19 10:37:08.104458 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.51s
2025-06-19 10:37:08.104468 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.50s
2025-06-19 10:37:08.104488 | orchestrator | 2025-06-19 10:37:08 | INFO  | Task 62d50922-0c76-44ab-a6bd-486ee8571dd7 is in state STARTED
2025-06-19 10:37:08.104499 | orchestrator | 2025-06-19 10:37:08 | INFO  | Task 45056b76-0243-48aa-af7c-7c9502a63e7e is in state STARTED
2025-06-19 10:37:08.104509 | orchestrator | 2025-06-19 10:37:08 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:37:11.132379 | orchestrator | 2025-06-19 10:37:11 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED
2025-06-19 10:37:11.132609 | orchestrator | 2025-06-19 10:37:11 | INFO  | Task 7d06717f-1d43-4a7d-bee5-54b2c742051a is in state STARTED
2025-06-19 10:37:11.133367 | orchestrator | 2025-06-19 10:37:11 | INFO  | Task 62d50922-0c76-44ab-a6bd-486ee8571dd7 is in state STARTED
2025-06-19 10:37:11.134543 | orchestrator | 2025-06-19 10:37:11 | INFO  | Task 45056b76-0243-48aa-af7c-7c9502a63e7e is in state STARTED
2025-06-19 10:37:11.134642 | orchestrator | 2025-06-19 10:37:11 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:37:14.169305 | orchestrator | 2025-06-19 10:37:14 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED
2025-06-19 10:37:14.170661 | orchestrator | 2025-06-19 10:37:14 | INFO  | Task 7d06717f-1d43-4a7d-bee5-54b2c742051a is in state STARTED
2025-06-19 10:37:14.172373 | orchestrator | 2025-06-19 10:37:14 | INFO  | Task 62d50922-0c76-44ab-a6bd-486ee8571dd7 is in state STARTED
2025-06-19 10:37:14.173697 | orchestrator | 2025-06-19 10:37:14 | INFO  | Task 45056b76-0243-48aa-af7c-7c9502a63e7e is in state STARTED
2025-06-19 10:37:14.173723 | orchestrator | 2025-06-19 10:37:14 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:37:17.225962 | orchestrator | 2025-06-19 10:37:17 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED
2025-06-19 10:37:17.227099 | orchestrator | 2025-06-19 10:37:17 | INFO  | Task 7d06717f-1d43-4a7d-bee5-54b2c742051a is in state STARTED
2025-06-19 10:37:17.228488 | orchestrator | 2025-06-19 10:37:17 | INFO  | Task 62d50922-0c76-44ab-a6bd-486ee8571dd7 is in state STARTED
2025-06-19 10:37:17.230336 | orchestrator | 2025-06-19 10:37:17 | INFO  | Task 45056b76-0243-48aa-af7c-7c9502a63e7e is in state STARTED
2025-06-19 10:37:17.230808 | orchestrator | 2025-06-19 10:37:17 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:37:20.275067 | orchestrator | 2025-06-19 10:37:20 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED
2025-06-19 10:37:20.277778 | orchestrator | 2025-06-19 10:37:20 | INFO  | Task 7d06717f-1d43-4a7d-bee5-54b2c742051a is in state STARTED
2025-06-19 10:37:20.281030 | orchestrator | 2025-06-19 10:37:20 | INFO  | Task 62d50922-0c76-44ab-a6bd-486ee8571dd7 is in state STARTED
2025-06-19 10:37:20.282112 | orchestrator | 2025-06-19 10:37:20 | INFO  | Task 45056b76-0243-48aa-af7c-7c9502a63e7e is in state STARTED
2025-06-19 10:37:20.282215 | orchestrator | 2025-06-19 10:37:20 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:37:23.335125 | orchestrator | 2025-06-19 10:37:23 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED
2025-06-19 10:37:23.336024 | orchestrator | 2025-06-19 10:37:23 | INFO  | Task 7d06717f-1d43-4a7d-bee5-54b2c742051a is in state STARTED
2025-06-19 10:37:23.337153 | orchestrator | 2025-06-19 10:37:23 | INFO  | Task 62d50922-0c76-44ab-a6bd-486ee8571dd7 is in state STARTED
2025-06-19 10:37:23.337178 | orchestrator | 2025-06-19 10:37:23 | INFO  | Task 45056b76-0243-48aa-af7c-7c9502a63e7e is in state STARTED
2025-06-19 10:37:23.337310 | orchestrator | 2025-06-19 10:37:23 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:37:26.375832 | orchestrator | 2025-06-19 10:37:26 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED
2025-06-19 10:37:26.381194 | orchestrator | 2025-06-19 10:37:26 | INFO  | Task 7d06717f-1d43-4a7d-bee5-54b2c742051a is in state STARTED
2025-06-19 10:37:26.381247 | orchestrator | 2025-06-19 10:37:26 | INFO  | Task 62d50922-0c76-44ab-a6bd-486ee8571dd7 is in state STARTED
2025-06-19 10:37:26.381403 | orchestrator | 2025-06-19 10:37:26 | INFO  | Task 45056b76-0243-48aa-af7c-7c9502a63e7e is in state STARTED
2025-06-19 10:37:26.382128 | orchestrator | 2025-06-19 10:37:26 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:37:29.421120 | orchestrator | 2025-06-19 10:37:29 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED
2025-06-19 10:37:29.422195 | orchestrator | 2025-06-19 10:37:29 | INFO  | Task 7d06717f-1d43-4a7d-bee5-54b2c742051a is in state STARTED
2025-06-19 10:37:29.422657 | orchestrator | 2025-06-19 10:37:29 | INFO  | Task 62d50922-0c76-44ab-a6bd-486ee8571dd7 is in state SUCCESS
2025-06-19 10:37:29.423980 | orchestrator | 2025-06-19 10:37:29 | INFO  | Task 45056b76-0243-48aa-af7c-7c9502a63e7e is in state STARTED
2025-06-19 10:37:29.424072 | orchestrator | 2025-06-19 10:37:29 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:37:32.463785 | orchestrator | 2025-06-19 10:37:32 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED
2025-06-19 10:37:32.463928 | orchestrator | 2025-06-19 10:37:32 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED
2025-06-19 10:37:32.464608 | orchestrator | 2025-06-19 10:37:32 | INFO  | Task 7d06717f-1d43-4a7d-bee5-54b2c742051a is in state STARTED
2025-06-19 10:37:32.467282 | orchestrator | 2025-06-19 10:37:32 | INFO  | Task 45056b76-0243-48aa-af7c-7c9502a63e7e is in state STARTED
2025-06-19 10:37:32.468038 | orchestrator | 2025-06-19 10:37:32 | INFO  | Task 0d57b8e1-add7-48e3-9294-e725030e8431 is in state STARTED
2025-06-19 10:37:32.468157 | orchestrator | 2025-06-19 10:37:32 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:37:35.511956 | orchestrator | 2025-06-19 10:37:35 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED
2025-06-19 10:37:35.512931 | orchestrator | 2025-06-19 10:37:35 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED
2025-06-19 10:37:35.515182 | orchestrator | 2025-06-19 10:37:35 | INFO  | Task 7d06717f-1d43-4a7d-bee5-54b2c742051a is in state STARTED
2025-06-19 10:37:35.516516 | orchestrator | 2025-06-19 10:37:35 | INFO  | Task 45056b76-0243-48aa-af7c-7c9502a63e7e is in state STARTED
2025-06-19 10:37:35.518746 | orchestrator | 2025-06-19 10:37:35 | INFO  | Task 0d57b8e1-add7-48e3-9294-e725030e8431 is in state STARTED
2025-06-19 10:37:35.518983 | orchestrator | 2025-06-19 10:37:35 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:37:38.577911 | orchestrator | 2025-06-19 10:37:38 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED
2025-06-19 10:37:38.582823 | orchestrator | 2025-06-19 10:37:38 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED
2025-06-19 10:37:38.586317 | orchestrator | 2025-06-19 10:37:38 | INFO  | Task 7d06717f-1d43-4a7d-bee5-54b2c742051a is in state STARTED
2025-06-19 10:37:38.586575 | orchestrator | 2025-06-19 10:37:38 | INFO  | Task 45056b76-0243-48aa-af7c-7c9502a63e7e is in state STARTED
2025-06-19 10:37:38.588692 | orchestrator | 2025-06-19 10:37:38 | INFO  | Task 0d57b8e1-add7-48e3-9294-e725030e8431 is in state STARTED
2025-06-19 10:37:38.591173 | orchestrator | 2025-06-19 10:37:38 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:37:41.621992 | orchestrator | 2025-06-19 10:37:41 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED
2025-06-19 10:37:41.622191 | orchestrator | 2025-06-19 10:37:41 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED
2025-06-19 10:37:41.623067 | orchestrator | 2025-06-19 10:37:41 | INFO  | Task 7d06717f-1d43-4a7d-bee5-54b2c742051a is in state STARTED
2025-06-19 10:37:41.623441 | orchestrator | 2025-06-19 10:37:41 | INFO  | Task 45056b76-0243-48aa-af7c-7c9502a63e7e is in state STARTED
2025-06-19 10:37:41.625064 | orchestrator | 2025-06-19 10:37:41 | INFO  | Task 0d57b8e1-add7-48e3-9294-e725030e8431 is in state STARTED
2025-06-19 10:37:41.625093 | orchestrator | 2025-06-19 10:37:41 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:37:44.644346 | orchestrator | 2025-06-19 10:37:44 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED
2025-06-19 10:37:44.644463 | orchestrator | 2025-06-19 10:37:44 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED
2025-06-19 10:37:44.644879 | orchestrator | 2025-06-19 10:37:44 | INFO  | Task 7d06717f-1d43-4a7d-bee5-54b2c742051a is in state STARTED
2025-06-19 10:37:44.645484 | orchestrator | 2025-06-19 10:37:44 | INFO  | Task 45056b76-0243-48aa-af7c-7c9502a63e7e is in state STARTED
2025-06-19 10:37:44.646217 | orchestrator | 2025-06-19 10:37:44 | INFO  | Task 0d57b8e1-add7-48e3-9294-e725030e8431 is in state STARTED
2025-06-19 10:37:44.646256 | orchestrator | 2025-06-19 10:37:44 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:37:47.678739 | orchestrator | 2025-06-19 10:37:47 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED
2025-06-19 10:37:47.679883 | orchestrator | 2025-06-19 10:37:47 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED
2025-06-19 10:37:47.681886 | orchestrator | 2025-06-19 10:37:47 | INFO  | Task 7d06717f-1d43-4a7d-bee5-54b2c742051a is in state STARTED
2025-06-19 10:37:47.683104 | orchestrator | 2025-06-19 10:37:47 | INFO  | Task 45056b76-0243-48aa-af7c-7c9502a63e7e is in state STARTED
2025-06-19 10:37:47.684436 | orchestrator | 2025-06-19 10:37:47 | INFO  | Task 0d57b8e1-add7-48e3-9294-e725030e8431 is in state STARTED
2025-06-19 10:37:47.684487 | orchestrator | 2025-06-19 10:37:47 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:37:50.724363 | orchestrator | 2025-06-19 10:37:50 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED
2025-06-19 10:37:50.725424 | orchestrator | 2025-06-19 10:37:50 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED
2025-06-19 10:37:50.727486 | orchestrator | 2025-06-19 10:37:50 | INFO  | Task 7d06717f-1d43-4a7d-bee5-54b2c742051a is in state STARTED
2025-06-19 10:37:50.730107 | orchestrator | 2025-06-19 10:37:50 | INFO  | Task 45056b76-0243-48aa-af7c-7c9502a63e7e is in state STARTED
2025-06-19 10:37:50.731573 | orchestrator | 2025-06-19 10:37:50 | INFO  | Task 0d57b8e1-add7-48e3-9294-e725030e8431 is in state STARTED
2025-06-19 10:37:50.731599 | orchestrator | 2025-06-19 10:37:50 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:37:53.778823 | orchestrator | 2025-06-19 10:37:53 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED
2025-06-19 10:37:53.780118 | orchestrator | 2025-06-19 10:37:53 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED
2025-06-19 10:37:53.782257 | orchestrator | 2025-06-19 10:37:53 | INFO  | Task 7d06717f-1d43-4a7d-bee5-54b2c742051a is in state STARTED
2025-06-19 10:37:53.785755 | orchestrator | 2025-06-19 10:37:53 | INFO  | Task 45056b76-0243-48aa-af7c-7c9502a63e7e is in state STARTED
2025-06-19 10:37:53.787695 | orchestrator | 2025-06-19 10:37:53 | INFO  | Task 0d57b8e1-add7-48e3-9294-e725030e8431 is in state STARTED
2025-06-19 10:37:53.787748 | orchestrator | 2025-06-19 10:37:53 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:37:56.825089 | orchestrator | 2025-06-19 10:37:56 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED
2025-06-19 10:37:56.826186 | orchestrator | 2025-06-19 10:37:56 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED
2025-06-19 10:37:56.828247 | orchestrator | 2025-06-19 10:37:56 | INFO  | Task 7d06717f-1d43-4a7d-bee5-54b2c742051a is in state STARTED
2025-06-19 10:37:56.829448 | orchestrator | 2025-06-19 10:37:56 | INFO  | Task 45056b76-0243-48aa-af7c-7c9502a63e7e is in state STARTED
2025-06-19 10:37:56.831390 | orchestrator | 2025-06-19 10:37:56 | INFO  | Task 0d57b8e1-add7-48e3-9294-e725030e8431 is in state STARTED
2025-06-19 10:37:56.831413 | orchestrator | 2025-06-19 10:37:56 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:37:59.877326 | orchestrator | 2025-06-19 10:37:59 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED
2025-06-19 10:37:59.878549 | orchestrator | 2025-06-19 10:37:59 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED
2025-06-19 10:37:59.880574 | orchestrator | 2025-06-19 10:37:59 | INFO  | Task 7d06717f-1d43-4a7d-bee5-54b2c742051a is in state STARTED
2025-06-19 10:37:59.881894 | orchestrator | 2025-06-19 10:37:59 | INFO  | Task 45056b76-0243-48aa-af7c-7c9502a63e7e is in state STARTED
2025-06-19 10:37:59.883978 | orchestrator | 2025-06-19 10:37:59 | INFO  | Task 0d57b8e1-add7-48e3-9294-e725030e8431 is in state STARTED
2025-06-19 10:37:59.884002 | orchestrator | 2025-06-19 10:37:59 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:38:02.925562 | orchestrator | 2025-06-19 10:38:02 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED
2025-06-19 10:38:02.927111 | orchestrator | 2025-06-19 10:38:02 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED
2025-06-19 10:38:02.930356 | orchestrator | 2025-06-19 10:38:02 | INFO  | Task 7d06717f-1d43-4a7d-bee5-54b2c742051a is in state SUCCESS
2025-06-19 10:38:02.932304 | orchestrator |
2025-06-19 10:38:02.932349 | orchestrator |
2025-06-19 10:38:02.932362 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-19 10:38:02.932375 | orchestrator |
2025-06-19 10:38:02.932386 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-19 10:38:02.932397 | orchestrator | Thursday 19 June 2025 10:37:11 +0000 (0:00:00.261) 0:00:00.261 *********
2025-06-19 10:38:02.932408 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:38:02.932421 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:38:02.932575 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:38:02.932592 | orchestrator |
2025-06-19 10:38:02.932604 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-19 10:38:02.932615 | orchestrator | Thursday 19 June 2025 10:37:12 +0000 (0:00:00.271) 0:00:00.532 *********
2025-06-19 10:38:02.932626 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-06-19 10:38:02.933003 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-06-19 10:38:02.933017 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-06-19 10:38:02.933028 | orchestrator |
2025-06-19 10:38:02.933055 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2025-06-19 10:38:02.933066 | orchestrator |
2025-06-19 10:38:02.933077 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2025-06-19 10:38:02.933088 | orchestrator | Thursday 19 June 2025 10:37:12 +0000 (0:00:00.560) 0:00:01.093 *********
2025-06-19 10:38:02.933099 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:38:02.933110 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:38:02.933120 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:38:02.933131 | orchestrator |
2025-06-19 10:38:02.933164 | orchestrator | PLAY RECAP *********************************************************************
2025-06-19 10:38:02.933176 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-19 10:38:02.933189 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-19 10:38:02.933199 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-19 10:38:02.933211 | orchestrator |
2025-06-19 10:38:02.933222 | orchestrator |
2025-06-19 10:38:02.933233 | orchestrator | TASKS RECAP ********************************************************************
2025-06-19 10:38:02.933243 | orchestrator | Thursday 19 June 2025 10:37:28 +0000 (0:00:15.779) 0:00:16.873 *********
2025-06-19 10:38:02.933254 | orchestrator | ===============================================================================
2025-06-19 10:38:02.933265 | orchestrator | Waiting for Keystone public port to be UP ------------------------------ 15.78s
2025-06-19 10:38:02.933275 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.56s
2025-06-19 10:38:02.933286 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.27s
2025-06-19 10:38:02.933297 | orchestrator |
2025-06-19 10:38:02.933307 | orchestrator |
2025-06-19 10:38:02.933318 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-19 10:38:02.933328 | orchestrator |
2025-06-19 10:38:02.933339 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-19 10:38:02.933349 | orchestrator | Thursday 19 June 2025 10:35:20 +0000 (0:00:00.272) 0:00:00.272 *********
2025-06-19 10:38:02.933360 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:38:02.933371 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:38:02.933381 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:38:02.933392 | orchestrator |
2025-06-19 10:38:02.933402 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-19 10:38:02.933413 | orchestrator | Thursday 19 June 2025 10:35:20 +0000 (0:00:00.290) 0:00:00.563 *********
2025-06-19 10:38:02.933423 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-06-19 10:38:02.933434 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-06-19 10:38:02.933445 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-06-19 10:38:02.933455 | orchestrator |
2025-06-19 10:38:02.933466 | orchestrator | PLAY [Apply role keystone] *****************************************************
2025-06-19 10:38:02.933477 | orchestrator |
2025-06-19 10:38:02.933487 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-06-19 10:38:02.933498 | orchestrator | Thursday 19 June 2025 10:35:21 +0000 (0:00:00.398) 0:00:00.961 *********
2025-06-19 10:38:02.933509 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-19 10:38:02.933519 | orchestrator |
2025-06-19 10:38:02.933530 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2025-06-19 10:38:02.933541 | orchestrator | Thursday 19 June 2025 10:35:21 +0000 (0:00:00.557) 0:00:01.519 *********
2025-06-19 10:38:02.933595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-19 10:38:02.933629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-19 10:38:02.933645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-19 10:38:02.933660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-19 10:38:02.933675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-19 10:38:02.933688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-19 10:38:02.933715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-19 10:38:02.933735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-19 10:38:02.933749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-19 10:38:02.933762 | orchestrator |
2025-06-19 10:38:02.933776 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2025-06-19 10:38:02.933789 | orchestrator | Thursday 19 June 2025 10:35:23 +0000 (0:00:01.833) 0:00:03.352 *********
2025-06-19 10:38:02.933801 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml)
2025-06-19 10:38:02.933847 | orchestrator |
2025-06-19 10:38:02.933860 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2025-06-19 10:38:02.933873 | orchestrator | Thursday 19 June 2025 10:35:24 +0000 (0:00:00.792) 0:00:04.145 *********
2025-06-19 10:38:02.933886 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:38:02.933898 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:38:02.933910 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:38:02.933922 | orchestrator |
2025-06-19 10:38:02.933935 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2025-06-19 10:38:02.933947 | orchestrator | Thursday 19 June 2025 10:35:24 +0000 (0:00:00.465) 0:00:04.610 *********
2025-06-19 10:38:02.933960 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-19 10:38:02.933972 | orchestrator |
2025-06-19 10:38:02.933983 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-06-19 10:38:02.933993 | orchestrator | Thursday 19 June 2025 10:35:25 +0000 (0:00:00.685) 0:00:05.296 *********
2025-06-19 10:38:02.934004 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-19 10:38:02.934015 | orchestrator |
2025-06-19 10:38:02.934076 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] *******
2025-06-19 10:38:02.934087 | orchestrator | Thursday 19 June 2025 10:35:26 +0000 (0:00:00.541) 0:00:05.837 *********
2025-06-19 10:38:02.934099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-19 10:38:02.934136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-19 10:38:02.934150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-06-19 10:38:02.934162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-19 10:38:02.934174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-19 10:38:02.934192 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-06-19 10:38:02.934212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-19 10:38:02.934228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-19 10:38:02.934240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-19 10:38:02.934251 | orchestrator | 2025-06-19 10:38:02.934262 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-06-19 10:38:02.934273 | orchestrator | Thursday 19 June 2025 10:35:29 +0000 (0:00:03.348) 0:00:09.186 ********* 2025-06-19 10:38:02.934285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-19 10:38:02.934296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-19 10:38:02.934314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-19 10:38:02.934326 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:38:02.934345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-19 10:38:02.934358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-19 10:38:02.934369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-19 10:38:02.934381 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:38:02.934483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-19 10:38:02.934514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-19 10:38:02.934534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-19 10:38:02.934546 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:38:02.934557 | orchestrator | 2025-06-19 10:38:02.934568 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS 
key] **** 2025-06-19 10:38:02.934579 | orchestrator | Thursday 19 June 2025 10:35:30 +0000 (0:00:00.830) 0:00:10.017 ********* 2025-06-19 10:38:02.934595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-19 10:38:02.934607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-19 10:38:02.934618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-19 10:38:02.934636 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:38:02.934648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-19 10:38:02.934666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-19 10:38:02.934683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-19 10:38:02.934694 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:38:02.934706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin']}}}})  2025-06-19 10:38:02.934718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-19 10:38:02.934735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-19 10:38:02.934746 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:38:02.934757 | orchestrator | 2025-06-19 10:38:02.934768 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-06-19 10:38:02.934779 | orchestrator | Thursday 19 June 2025 10:35:31 +0000 (0:00:00.827) 0:00:10.844 ********* 2025-06-19 10:38:02.934798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-19 10:38:02.934864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-19 10:38:02.934880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-19 10:38:02.934899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-19 10:38:02.934910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-19 10:38:02.934922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-19 10:38:02.934940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-19 10:38:02.934957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-19 10:38:02.934968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-19 10:38:02.934985 | orchestrator | 2025-06-19 10:38:02.934996 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-06-19 10:38:02.935007 | orchestrator | Thursday 19 June 2025 10:35:34 +0000 (0:00:03.551) 0:00:14.395 ********* 2025-06-19 10:38:02.935018 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-19 10:38:02.935030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-19 10:38:02.935172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-19 10:38:02.935192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-19 10:38:02.935204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-19 10:38:02.935224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-19 10:38:02.935235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-19 10:38:02.935247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-19 10:38:02.935264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-06-19 10:38:02.935276 | orchestrator |
2025-06-19 10:38:02.935287 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] *****************
2025-06-19 10:38:02.935297 | orchestrator | Thursday 19 June 2025 10:35:40 +0000 (0:00:05.586) 0:00:19.982 *********
2025-06-19 10:38:02.935308 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:38:02.935325 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:38:02.935336 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:38:02.935346 | orchestrator |
2025-06-19 10:38:02.935357 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] *************
2025-06-19 10:38:02.935368 | orchestrator | Thursday 19 June 2025 10:35:41 +0000 (0:00:01.453) 0:00:21.436 *********
2025-06-19 10:38:02.935379 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:38:02.935389 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:38:02.935400 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:38:02.935418 | orchestrator |
2025-06-19 10:38:02.935429 | orchestrator | TASK [keystone : Get file list in custom domains folder] ***********************
2025-06-19 10:38:02.935440 | orchestrator | Thursday 19 June 2025 10:35:42 +0000 (0:00:00.293) 0:00:22.113 *********
2025-06-19 10:38:02.935451 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:38:02.935461 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:38:02.935472 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:38:02.935482 | orchestrator |
2025-06-19 10:38:02.935493 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ********************
2025-06-19 10:38:02.935503 | orchestrator | Thursday 19 June 2025 10:35:42 +0000 (0:00:00.494) 0:00:22.407 *********
2025-06-19 10:38:02.935514 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:38:02.935525 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:38:02.935535 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:38:02.935546 | orchestrator | 2025-06-19 10:38:02.935557 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-06-19 10:38:02.935567 | orchestrator | Thursday 19 June 2025 10:35:43 +0000 (0:00:00.494) 0:00:22.902 ********* 2025-06-19 10:38:02.935579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-19 10:38:02.935591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-19 10:38:02.935610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-19 10:38:02.935631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-19 10:38:02.935650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-19 10:38:02.935662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-19 10:38:02.935673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-19 10:38:02.935685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-19 10:38:02.935702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-19 10:38:02.935713 | orchestrator | 2025-06-19 10:38:02.935724 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-19 10:38:02.935741 | orchestrator | Thursday 19 June 2025 10:35:45 +0000 (0:00:02.587) 0:00:25.489 ********* 2025-06-19 10:38:02.935752 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:38:02.935763 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:38:02.935773 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:38:02.935784 | orchestrator | 2025-06-19 10:38:02.935795 | orchestrator | TASK [keystone : 
Copying over wsgi-keystone.conf] ******************************
2025-06-19 10:38:02.935832 | orchestrator | Thursday 19 June 2025 10:35:46 +0000 (0:00:00.346) 0:00:25.836 *********
2025-06-19 10:38:02.935844 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-06-19 10:38:02.935856 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-06-19 10:38:02.935867 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-06-19 10:38:02.935877 | orchestrator |
2025-06-19 10:38:02.935888 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] **************
2025-06-19 10:38:02.935899 | orchestrator | Thursday 19 June 2025 10:35:47 +0000 (0:00:01.757) 0:00:27.594 *********
2025-06-19 10:38:02.935909 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-19 10:38:02.935920 | orchestrator |
2025-06-19 10:38:02.935930 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2025-06-19 10:38:02.935941 | orchestrator | Thursday 19 June 2025 10:35:49 +0000 (0:00:01.072) 0:00:28.666 *********
2025-06-19 10:38:02.935952 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:38:02.935962 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:38:02.935972 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:38:02.935983 | orchestrator |
2025-06-19 10:38:02.935993 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2025-06-19 10:38:02.936004 | orchestrator | Thursday 19 June 2025 10:35:49 +0000 (0:00:00.748) 0:00:29.415 *********
2025-06-19 10:38:02.936015 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-06-19 10:38:02.936025 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-19 10:38:02.936036 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-06-19 10:38:02.936046 | orchestrator |
2025-06-19 10:38:02.936057 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2025-06-19 10:38:02.936068 | orchestrator | Thursday 19 June 2025 10:35:50 +0000 (0:00:01.051) 0:00:30.466 *********
2025-06-19 10:38:02.936078 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:38:02.936089 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:38:02.936100 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:38:02.936110 | orchestrator |
2025-06-19 10:38:02.936121 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2025-06-19 10:38:02.936132 | orchestrator | Thursday 19 June 2025 10:35:51 +0000 (0:00:00.321) 0:00:30.787 *********
2025-06-19 10:38:02.936142 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-06-19 10:38:02.936153 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-06-19 10:38:02.936164 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-06-19 10:38:02.936174 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-06-19 10:38:02.936185 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-06-19 10:38:02.936195 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-06-19 10:38:02.936303 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-06-19 10:38:02.936317 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-06-19 10:38:02.936328 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-06-19 10:38:02.936347 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-06-19 10:38:02.936358 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-06-19 10:38:02.936368 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-06-19 10:38:02.936379 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-06-19 10:38:02.936390 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-06-19 10:38:02.936400 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-06-19 10:38:02.936411 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-06-19 10:38:02.936422 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-06-19 10:38:02.936433 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-06-19 10:38:02.936452 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-06-19 10:38:02.936463 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-06-19 10:38:02.936474 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-06-19 10:38:02.936485 | orchestrator |
2025-06-19 10:38:02.936496 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2025-06-19 10:38:02.936507 | orchestrator | Thursday 19 June 2025 10:35:59 +0000 (0:00:08.771) 0:00:39.559 *********
2025-06-19 10:38:02.936518 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-06-19 10:38:02.936529 |
orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-19 10:38:02.936540 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-19 10:38:02.936556 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-19 10:38:02.936568 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-19 10:38:02.936579 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-19 10:38:02.936589 | orchestrator | 2025-06-19 10:38:02.936600 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-06-19 10:38:02.936611 | orchestrator | Thursday 19 June 2025 10:36:02 +0000 (0:00:02.708) 0:00:42.268 ********* 2025-06-19 10:38:02.936623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-19 10:38:02.936636 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-19 10:38:02.936662 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-19 10:38:02.936674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-19 10:38:02.936691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-19 10:38:02.936703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-19 10:38:02.936715 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-19 10:38:02.936733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-19 10:38:02.936745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-19 10:38:02.936756 | orchestrator | 2025-06-19 10:38:02.936767 | orchestrator | TASK [keystone : 
include_tasks] ************************************************
2025-06-19 10:38:02.936778 | orchestrator | Thursday 19 June 2025 10:36:04 +0000 (0:00:02.202) 0:00:44.470 *********
2025-06-19 10:38:02.936789 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:38:02.936800 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:38:02.936828 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:38:02.936840 | orchestrator |
2025-06-19 10:38:02.936857 | orchestrator | TASK [keystone : Creating keystone database] ***********************************
2025-06-19 10:38:02.936868 | orchestrator | Thursday 19 June 2025 10:36:05 +0000 (0:00:00.282) 0:00:44.753 *********
2025-06-19 10:38:02.936879 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:38:02.936890 | orchestrator |
2025-06-19 10:38:02.936901 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ******
2025-06-19 10:38:02.936911 | orchestrator | Thursday 19 June 2025 10:36:07 +0000 (0:00:02.134) 0:00:46.887 *********
2025-06-19 10:38:02.936922 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:38:02.936935 | orchestrator |
2025-06-19 10:38:02.936948 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] **********
2025-06-19 10:38:02.936960 | orchestrator | Thursday 19 June 2025 10:36:09 +0000 (0:00:02.035) 0:00:48.923 *********
2025-06-19 10:38:02.936972 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:38:02.936985 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:38:02.936997 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:38:02.937009 | orchestrator |
2025-06-19 10:38:02.937021 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
2025-06-19 10:38:02.937039 | orchestrator | Thursday 19 June 2025 10:36:10 +0000 (0:00:01.022) 0:00:49.945 *********
2025-06-19 10:38:02.937131 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:38:02.937174 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:38:02.937188 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:38:02.937201 | orchestrator |
2025-06-19 10:38:02.937214 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2025-06-19 10:38:02.937226 | orchestrator | Thursday 19 June 2025 10:36:10 +0000 (0:00:00.279) 0:00:50.225 *********
2025-06-19 10:38:02.937239 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:38:02.937251 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:38:02.937263 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:38:02.937276 | orchestrator |
2025-06-19 10:38:02.937288 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2025-06-19 10:38:02.937307 | orchestrator | Thursday 19 June 2025 10:36:10 +0000 (0:00:00.350) 0:00:50.576 *********
2025-06-19 10:38:02.937318 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:38:02.937329 | orchestrator |
2025-06-19 10:38:02.937340 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2025-06-19 10:38:02.937350 | orchestrator | Thursday 19 June 2025 10:36:24 +0000 (0:00:13.171) 0:01:03.748 *********
2025-06-19 10:38:02.937361 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:38:02.937372 | orchestrator |
2025-06-19 10:38:02.937382 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-06-19 10:38:02.937393 | orchestrator | Thursday 19 June 2025 10:36:33 +0000 (0:00:09.509) 0:01:13.257 *********
2025-06-19 10:38:02.937403 | orchestrator |
2025-06-19 10:38:02.937414 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-06-19 10:38:02.937425 | orchestrator | Thursday 19 June 2025 10:36:33 +0000 (0:00:00.066) 0:01:13.324 *********
2025-06-19 10:38:02.937435 | orchestrator |
2025-06-19 10:38:02.937446 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-06-19 10:38:02.937457 | orchestrator | Thursday 19 June 2025 10:36:33 +0000 (0:00:00.061) 0:01:13.386 *********
2025-06-19 10:38:02.937467 | orchestrator |
2025-06-19 10:38:02.937478 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2025-06-19 10:38:02.937488 | orchestrator | Thursday 19 June 2025 10:36:33 +0000 (0:00:00.068) 0:01:13.454 *********
2025-06-19 10:38:02.937499 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:38:02.937509 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:38:02.937520 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:38:02.937530 | orchestrator |
2025-06-19 10:38:02.937541 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2025-06-19 10:38:02.937635 | orchestrator | Thursday 19 June 2025 10:36:57 +0000 (0:00:23.751) 0:01:37.206 *********
2025-06-19 10:38:02.937647 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:38:02.937658 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:38:02.937669 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:38:02.937680 | orchestrator |
2025-06-19 10:38:02.937691 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2025-06-19 10:38:02.937702 | orchestrator | Thursday 19 June 2025 10:37:08 +0000 (0:00:10.530) 0:01:47.736 *********
2025-06-19 10:38:02.937713 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:38:02.937724 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:38:02.937734 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:38:02.937745 | orchestrator |
2025-06-19 10:38:02.937756 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-06-19 10:38:02.937766 | orchestrator | Thursday 19 June 2025 10:37:20 +0000 (0:00:12.310) 0:02:00.047 *********
2025-06-19 10:38:02.937777 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-19 10:38:02.937788 | orchestrator |
2025-06-19 10:38:02.937799 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2025-06-19 10:38:02.937913 | orchestrator | Thursday 19 June 2025 10:37:21 +0000 (0:00:00.897) 0:02:00.944 *********
2025-06-19 10:38:02.937927 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:38:02.937938 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:38:02.937949 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:38:02.937959 | orchestrator |
2025-06-19 10:38:02.937970 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2025-06-19 10:38:02.937981 | orchestrator | Thursday 19 June 2025 10:37:22 +0000 (0:00:00.945) 0:02:01.890 *********
2025-06-19 10:38:02.937992 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:38:02.938002 | orchestrator |
2025-06-19 10:38:02.938013 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2025-06-19 10:38:02.938071 | orchestrator | Thursday 19 June 2025 10:37:24 +0000 (0:00:01.960) 0:02:03.850 *********
2025-06-19 10:38:02.938081 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2025-06-19 10:38:02.938099 | orchestrator |
2025-06-19 10:38:02.938109 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2025-06-19 10:38:02.938118 | orchestrator | Thursday 19 June 2025 10:37:34 +0000 (0:00:10.246) 0:02:14.097 *********
2025-06-19 10:38:02.938128 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2025-06-19 10:38:02.938138 | orchestrator |
2025-06-19 10:38:02.938157 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2025-06-19 10:38:02.938167 | orchestrator | Thursday 19 June 2025 10:37:48 +0000 (0:00:14.073) 0:02:28.170 *********
2025-06-19 10:38:02.938177 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2025-06-19 10:38:02.938187 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2025-06-19 10:38:02.938196 | orchestrator |
2025-06-19 10:38:02.938206 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2025-06-19 10:38:02.938216 | orchestrator | Thursday 19 June 2025 10:37:54 +0000 (0:00:06.231) 0:02:34.401 *********
2025-06-19 10:38:02.938225 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:38:02.938235 | orchestrator |
2025-06-19 10:38:02.938244 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2025-06-19 10:38:02.938346 | orchestrator | Thursday 19 June 2025 10:37:54 +0000 (0:00:00.149) 0:02:34.550 *********
2025-06-19 10:38:02.938359 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:38:02.938369 | orchestrator |
2025-06-19 10:38:02.938386 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2025-06-19 10:38:02.938395 | orchestrator | Thursday 19 June 2025 10:37:55 +0000 (0:00:00.114) 0:02:34.665 *********
2025-06-19 10:38:02.938405 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:38:02.938414 | orchestrator |
2025-06-19 10:38:02.938424 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2025-06-19 10:38:02.938433 | orchestrator | Thursday 19 June 2025 10:37:55 +0000 (0:00:00.315) 0:02:34.981 *********
2025-06-19 10:38:02.938443 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:38:02.938452 | orchestrator |
2025-06-19 10:38:02.938462 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2025-06-19 10:38:02.938471 | orchestrator | Thursday 19 June 2025 10:37:55 +0000 (0:00:00.333) 0:02:35.314 *********
2025-06-19 10:38:02.938480 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:38:02.938490 | orchestrator |
2025-06-19 10:38:02.938499 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-06-19 10:38:02.938509 | orchestrator | Thursday 19 June 2025 10:37:58 +0000 (0:00:03.302) 0:02:38.617 *********
2025-06-19 10:38:02.938518 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:38:02.938527 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:38:02.938537 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:38:02.938546 | orchestrator |
2025-06-19 10:38:02.938555 | orchestrator | PLAY RECAP *********************************************************************
2025-06-19 10:38:02.938566 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-06-19 10:38:02.938577 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-06-19 10:38:02.938587 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-06-19 10:38:02.938596 | orchestrator |
2025-06-19 10:38:02.938605 | orchestrator |
2025-06-19 10:38:02.938615 | orchestrator | TASKS RECAP ********************************************************************
2025-06-19 10:38:02.938625 | orchestrator | Thursday 19 June 2025 10:37:59 +0000 (0:00:00.586) 0:02:39.204 *********
2025-06-19 10:38:02.938634 | orchestrator | ===============================================================================
2025-06-19 10:38:02.938643 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 23.75s
2025-06-19 10:38:02.938660 | orchestrator | service-ks-register : keystone | Creating services --------------------- 14.07s
2025-06-19 10:38:02.938670 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.17s
2025-06-19 10:38:02.938679 | orchestrator | keystone : Restart
keystone container ---------------------------------- 12.31s 2025-06-19 10:38:02.938688 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.53s 2025-06-19 10:38:02.938698 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 10.25s 2025-06-19 10:38:02.938707 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 9.51s 2025-06-19 10:38:02.938716 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.77s 2025-06-19 10:38:02.938726 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.23s 2025-06-19 10:38:02.938735 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.59s 2025-06-19 10:38:02.938744 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.55s 2025-06-19 10:38:02.938754 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.35s 2025-06-19 10:38:02.938763 | orchestrator | keystone : Creating default user role ----------------------------------- 3.30s 2025-06-19 10:38:02.938772 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.71s 2025-06-19 10:38:02.938782 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.59s 2025-06-19 10:38:02.938791 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.20s 2025-06-19 10:38:02.938800 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.13s 2025-06-19 10:38:02.938827 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.04s 2025-06-19 10:38:02.938838 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.96s 2025-06-19 10:38:02.938847 | orchestrator | keystone : Ensuring config 
directories exist ---------------------------- 1.83s 2025-06-19 10:38:02.938863 | orchestrator | 2025-06-19 10:38:02 | INFO  | Task 45056b76-0243-48aa-af7c-7c9502a63e7e is in state STARTED 2025-06-19 10:38:02.938873 | orchestrator | 2025-06-19 10:38:02 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED 2025-06-19 10:38:02.938883 | orchestrator | 2025-06-19 10:38:02 | INFO  | Task 0d57b8e1-add7-48e3-9294-e725030e8431 is in state STARTED 2025-06-19 10:38:02.938893 | orchestrator | 2025-06-19 10:38:02 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:38:05.993946 | orchestrator | 2025-06-19 10:38:05 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED 2025-06-19 10:38:05.996561 | orchestrator | 2025-06-19 10:38:05 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED 2025-06-19 10:38:05.998591 | orchestrator | 2025-06-19 10:38:05 | INFO  | Task 45056b76-0243-48aa-af7c-7c9502a63e7e is in state STARTED 2025-06-19 10:38:06.000759 | orchestrator | 2025-06-19 10:38:05 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED 2025-06-19 10:38:06.002296 | orchestrator | 2025-06-19 10:38:06 | INFO  | Task 0d57b8e1-add7-48e3-9294-e725030e8431 is in state STARTED 2025-06-19 10:38:06.002634 | orchestrator | 2025-06-19 10:38:06 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:38:09.048569 | orchestrator | 2025-06-19 10:38:09 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED 2025-06-19 10:38:09.051717 | orchestrator | 2025-06-19 10:38:09 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED 2025-06-19 10:38:09.054128 | orchestrator | 2025-06-19 10:38:09 | INFO  | Task 45056b76-0243-48aa-af7c-7c9502a63e7e is in state STARTED 2025-06-19 10:38:09.055543 | orchestrator | 2025-06-19 10:38:09 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED 2025-06-19 10:38:09.057031 | orchestrator | 2025-06-19 10:38:09 | INFO  | Task 
0d57b8e1-add7-48e3-9294-e725030e8431 is in state STARTED 2025-06-19 10:38:09.057425 | orchestrator | 2025-06-19 10:38:09 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:38:12.094199 | orchestrator | 2025-06-19 10:38:12 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED 2025-06-19 10:38:12.095558 | orchestrator | 2025-06-19 10:38:12 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED 2025-06-19 10:38:12.097316 | orchestrator | 2025-06-19 10:38:12 | INFO  | Task 45056b76-0243-48aa-af7c-7c9502a63e7e is in state STARTED 2025-06-19 10:38:12.099145 | orchestrator | 2025-06-19 10:38:12 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED 2025-06-19 10:38:12.100629 | orchestrator | 2025-06-19 10:38:12 | INFO  | Task 0d57b8e1-add7-48e3-9294-e725030e8431 is in state STARTED 2025-06-19 10:38:12.100720 | orchestrator | 2025-06-19 10:38:12 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:38:15.123611 | orchestrator | 2025-06-19 10:38:15 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED 2025-06-19 10:38:15.125915 | orchestrator | 2025-06-19 10:38:15 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED 2025-06-19 10:38:15.126961 | orchestrator | 2025-06-19 10:38:15 | INFO  | Task 45056b76-0243-48aa-af7c-7c9502a63e7e is in state STARTED 2025-06-19 10:38:15.127720 | orchestrator | 2025-06-19 10:38:15 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED 2025-06-19 10:38:15.130450 | orchestrator | 2025-06-19 10:38:15 | INFO  | Task 0d57b8e1-add7-48e3-9294-e725030e8431 is in state SUCCESS 2025-06-19 10:38:15.130478 | orchestrator | 2025-06-19 10:38:15 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:38:18.159683 | orchestrator | 2025-06-19 10:38:18 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED 2025-06-19 10:38:18.164222 | orchestrator | 2025-06-19 10:38:18 | INFO  | Task 
7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED 2025-06-19 10:38:18.165651 | orchestrator | 2025-06-19 10:38:18 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:38:18.167100 | orchestrator | 2025-06-19 10:38:18 | INFO  | Task 45056b76-0243-48aa-af7c-7c9502a63e7e is in state STARTED 2025-06-19 10:38:18.169557 | orchestrator | 2025-06-19 10:38:18 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED 2025-06-19 10:38:18.169581 | orchestrator | 2025-06-19 10:38:18 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:38:21.215856 | orchestrator | 2025-06-19 10:38:21 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED 2025-06-19 10:38:21.219670 | orchestrator | 2025-06-19 10:38:21 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED 2025-06-19 10:38:21.220021 | orchestrator | 2025-06-19 10:38:21 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:38:21.220620 | orchestrator | 2025-06-19 10:38:21 | INFO  | Task 45056b76-0243-48aa-af7c-7c9502a63e7e is in state STARTED 2025-06-19 10:38:21.221361 | orchestrator | 2025-06-19 10:38:21 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED 2025-06-19 10:38:21.221385 | orchestrator | 2025-06-19 10:38:21 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:38:24.257326 | orchestrator | 2025-06-19 10:38:24 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED 2025-06-19 10:38:24.257603 | orchestrator | 2025-06-19 10:38:24 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED 2025-06-19 10:38:24.258164 | orchestrator | 2025-06-19 10:38:24 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:38:24.259555 | orchestrator | 2025-06-19 10:38:24 | INFO  | Task 45056b76-0243-48aa-af7c-7c9502a63e7e is in state STARTED 2025-06-19 10:38:24.260150 | orchestrator | 2025-06-19 10:38:24 | INFO  | Task 
2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED 2025-06-19 10:38:24.260173 | orchestrator | 2025-06-19 10:38:24 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:38:27.294583 | orchestrator | 2025-06-19 10:38:27 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED 2025-06-19 10:38:27.294683 | orchestrator | 2025-06-19 10:38:27 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED 2025-06-19 10:38:27.297328 | orchestrator | 2025-06-19 10:38:27 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:38:27.299433 | orchestrator | 2025-06-19 10:38:27 | INFO  | Task 45056b76-0243-48aa-af7c-7c9502a63e7e is in state STARTED 2025-06-19 10:38:27.302245 | orchestrator | 2025-06-19 10:38:27 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED 2025-06-19 10:38:27.302272 | orchestrator | 2025-06-19 10:38:27 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:38:30.348489 | orchestrator | 2025-06-19 10:38:30 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED 2025-06-19 10:38:30.348591 | orchestrator | 2025-06-19 10:38:30 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED 2025-06-19 10:38:30.352651 | orchestrator | 2025-06-19 10:38:30 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:38:30.354573 | orchestrator | 2025-06-19 10:38:30 | INFO  | Task 45056b76-0243-48aa-af7c-7c9502a63e7e is in state STARTED 2025-06-19 10:38:30.355275 | orchestrator | 2025-06-19 10:38:30 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED 2025-06-19 10:38:30.355388 | orchestrator | 2025-06-19 10:38:30 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:38:33.395637 | orchestrator | 2025-06-19 10:38:33 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED 2025-06-19 10:38:33.397415 | orchestrator | 2025-06-19 10:38:33 | INFO  | Task 
7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED 2025-06-19 10:38:33.399231 | orchestrator | 2025-06-19 10:38:33 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:38:33.401127 | orchestrator | 2025-06-19 10:38:33 | INFO  | Task 45056b76-0243-48aa-af7c-7c9502a63e7e is in state STARTED 2025-06-19 10:38:33.402455 | orchestrator | 2025-06-19 10:38:33 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED 2025-06-19 10:38:33.402883 | orchestrator | 2025-06-19 10:38:33 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:38:36.449144 | orchestrator | 2025-06-19 10:38:36 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED 2025-06-19 10:38:36.450824 | orchestrator | 2025-06-19 10:38:36 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED 2025-06-19 10:38:36.451589 | orchestrator | 2025-06-19 10:38:36 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:38:36.452373 | orchestrator | 2025-06-19 10:38:36 | INFO  | Task 45056b76-0243-48aa-af7c-7c9502a63e7e is in state STARTED 2025-06-19 10:38:36.453190 | orchestrator | 2025-06-19 10:38:36 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED 2025-06-19 10:38:36.453414 | orchestrator | 2025-06-19 10:38:36 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:38:39.486463 | orchestrator | 2025-06-19 10:38:39 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED 2025-06-19 10:38:39.486597 | orchestrator | 2025-06-19 10:38:39 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED 2025-06-19 10:38:39.487206 | orchestrator | 2025-06-19 10:38:39 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:38:39.487347 | orchestrator | 2025-06-19 10:38:39 | INFO  | Task 45056b76-0243-48aa-af7c-7c9502a63e7e is in state SUCCESS 2025-06-19 10:38:39.488114 | orchestrator | 2025-06-19 10:38:39.488153 | orchestrator 
| 2025-06-19 10:38:39.488165 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-19 10:38:39.488177 | orchestrator | 2025-06-19 10:38:39.488188 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-19 10:38:39.488199 | orchestrator | Thursday 19 June 2025 10:37:34 +0000 (0:00:00.255) 0:00:00.255 ********* 2025-06-19 10:38:39.488210 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:38:39.488222 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:38:39.488233 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:38:39.488244 | orchestrator | ok: [testbed-manager] 2025-06-19 10:38:39.488254 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:38:39.488280 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:38:39.488290 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:38:39.488301 | orchestrator | 2025-06-19 10:38:39.488311 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-19 10:38:39.488322 | orchestrator | Thursday 19 June 2025 10:37:34 +0000 (0:00:00.804) 0:00:01.059 ********* 2025-06-19 10:38:39.488332 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-06-19 10:38:39.488343 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-06-19 10:38:39.488354 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-06-19 10:38:39.488365 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-06-19 10:38:39.488375 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-06-19 10:38:39.488386 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-06-19 10:38:39.488396 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-06-19 10:38:39.488407 | orchestrator | 2025-06-19 10:38:39.488418 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 
2025-06-19 10:38:39.488434 | orchestrator | 2025-06-19 10:38:39.488452 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-06-19 10:38:39.488470 | orchestrator | Thursday 19 June 2025 10:37:35 +0000 (0:00:00.716) 0:00:01.776 ********* 2025-06-19 10:38:39.488489 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-19 10:38:39.488509 | orchestrator | 2025-06-19 10:38:39.488527 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-06-19 10:38:39.488546 | orchestrator | Thursday 19 June 2025 10:37:38 +0000 (0:00:02.449) 0:00:04.226 ********* 2025-06-19 10:38:39.488560 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2025-06-19 10:38:39.488571 | orchestrator | 2025-06-19 10:38:39.488581 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-06-19 10:38:39.488592 | orchestrator | Thursday 19 June 2025 10:37:49 +0000 (0:00:10.917) 0:00:15.144 ********* 2025-06-19 10:38:39.488603 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-06-19 10:38:39.488615 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-06-19 10:38:39.488626 | orchestrator | 2025-06-19 10:38:39.488637 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-06-19 10:38:39.488649 | orchestrator | Thursday 19 June 2025 10:37:55 +0000 (0:00:06.716) 0:00:21.860 ********* 2025-06-19 10:38:39.488659 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-06-19 10:38:39.488684 | orchestrator | 2025-06-19 10:38:39.488694 | orchestrator | TASK [service-ks-register : 
ceph-rgw | Creating users] ************************* 2025-06-19 10:38:39.488705 | orchestrator | Thursday 19 June 2025 10:37:59 +0000 (0:00:03.530) 0:00:25.391 ********* 2025-06-19 10:38:39.488717 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-19 10:38:39.488729 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2025-06-19 10:38:39.488741 | orchestrator | 2025-06-19 10:38:39.488795 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-06-19 10:38:39.488808 | orchestrator | Thursday 19 June 2025 10:38:03 +0000 (0:00:04.114) 0:00:29.505 ********* 2025-06-19 10:38:39.488820 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-19 10:38:39.488832 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2025-06-19 10:38:39.488844 | orchestrator | 2025-06-19 10:38:39.488856 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-06-19 10:38:39.488869 | orchestrator | Thursday 19 June 2025 10:38:09 +0000 (0:00:06.117) 0:00:35.623 ********* 2025-06-19 10:38:39.488881 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2025-06-19 10:38:39.488892 | orchestrator | 2025-06-19 10:38:39.488904 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-19 10:38:39.488916 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-19 10:38:39.488928 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-19 10:38:39.488942 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-19 10:38:39.488954 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-19 10:38:39.488967 | orchestrator | testbed-node-3 : ok=3  changed=0 
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-19 10:38:39.488992 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-19 10:38:39.489005 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-19 10:38:39.489016 | orchestrator | 2025-06-19 10:38:39.489028 | orchestrator | 2025-06-19 10:38:39.489040 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-19 10:38:39.489052 | orchestrator | Thursday 19 June 2025 10:38:14 +0000 (0:00:05.021) 0:00:40.645 ********* 2025-06-19 10:38:39.489064 | orchestrator | =============================================================================== 2025-06-19 10:38:39.489084 | orchestrator | service-ks-register : ceph-rgw | Creating services --------------------- 10.92s 2025-06-19 10:38:39.489095 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.72s 2025-06-19 10:38:39.489105 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.12s 2025-06-19 10:38:39.489116 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.02s 2025-06-19 10:38:39.489126 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.11s 2025-06-19 10:38:39.489137 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.53s 2025-06-19 10:38:39.489147 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 2.45s 2025-06-19 10:38:39.489158 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.80s 2025-06-19 10:38:39.489168 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.72s 2025-06-19 10:38:39.489179 | orchestrator | 2025-06-19 10:38:39.489189 | orchestrator | 2025-06-19 10:38:39.489200 | 
orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-06-19 10:38:39.489218 | orchestrator | 2025-06-19 10:38:39.489229 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-06-19 10:38:39.489239 | orchestrator | Thursday 19 June 2025 10:37:10 +0000 (0:00:00.276) 0:00:00.276 ********* 2025-06-19 10:38:39.489249 | orchestrator | changed: [testbed-manager] 2025-06-19 10:38:39.489260 | orchestrator | 2025-06-19 10:38:39.489271 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-06-19 10:38:39.489281 | orchestrator | Thursday 19 June 2025 10:37:12 +0000 (0:00:01.614) 0:00:01.890 ********* 2025-06-19 10:38:39.489292 | orchestrator | changed: [testbed-manager] 2025-06-19 10:38:39.489302 | orchestrator | 2025-06-19 10:38:39.489313 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-06-19 10:38:39.489324 | orchestrator | Thursday 19 June 2025 10:37:13 +0000 (0:00:00.927) 0:00:02.818 ********* 2025-06-19 10:38:39.489334 | orchestrator | changed: [testbed-manager] 2025-06-19 10:38:39.489345 | orchestrator | 2025-06-19 10:38:39.489356 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-06-19 10:38:39.489366 | orchestrator | Thursday 19 June 2025 10:37:14 +0000 (0:00:00.941) 0:00:03.759 ********* 2025-06-19 10:38:39.489377 | orchestrator | changed: [testbed-manager] 2025-06-19 10:38:39.489387 | orchestrator | 2025-06-19 10:38:39.489398 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-06-19 10:38:39.489409 | orchestrator | Thursday 19 June 2025 10:37:15 +0000 (0:00:01.400) 0:00:05.160 ********* 2025-06-19 10:38:39.489419 | orchestrator | changed: [testbed-manager] 2025-06-19 10:38:39.489430 | orchestrator | 2025-06-19 10:38:39.489440 | orchestrator | TASK [Set 
mgr/dashboard/standby_error_status_code to 404] ********************** 2025-06-19 10:38:39.489451 | orchestrator | Thursday 19 June 2025 10:37:16 +0000 (0:00:01.063) 0:00:06.223 ********* 2025-06-19 10:38:39.489461 | orchestrator | changed: [testbed-manager] 2025-06-19 10:38:39.489472 | orchestrator | 2025-06-19 10:38:39.489482 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-06-19 10:38:39.489493 | orchestrator | Thursday 19 June 2025 10:37:17 +0000 (0:00:01.034) 0:00:07.257 ********* 2025-06-19 10:38:39.489503 | orchestrator | changed: [testbed-manager] 2025-06-19 10:38:39.489514 | orchestrator | 2025-06-19 10:38:39.489524 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-06-19 10:38:39.489535 | orchestrator | Thursday 19 June 2025 10:37:20 +0000 (0:00:02.071) 0:00:09.329 ********* 2025-06-19 10:38:39.489545 | orchestrator | changed: [testbed-manager] 2025-06-19 10:38:39.489559 | orchestrator | 2025-06-19 10:38:39.489578 | orchestrator | TASK [Create admin user] ******************************************************* 2025-06-19 10:38:39.489600 | orchestrator | Thursday 19 June 2025 10:37:21 +0000 (0:00:01.312) 0:00:10.641 ********* 2025-06-19 10:38:39.489620 | orchestrator | changed: [testbed-manager] 2025-06-19 10:38:39.489641 | orchestrator | 2025-06-19 10:38:39.489662 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-06-19 10:38:39.489685 | orchestrator | Thursday 19 June 2025 10:38:14 +0000 (0:00:53.147) 0:01:03.789 ********* 2025-06-19 10:38:39.489705 | orchestrator | skipping: [testbed-manager] 2025-06-19 10:38:39.489720 | orchestrator | 2025-06-19 10:38:39.489731 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-06-19 10:38:39.489741 | orchestrator | 2025-06-19 10:38:39.489774 | orchestrator | TASK [Restart ceph manager service] 
******************************************** 2025-06-19 10:38:39.489787 | orchestrator | Thursday 19 June 2025 10:38:14 +0000 (0:00:00.132) 0:01:03.922 ********* 2025-06-19 10:38:39.489806 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:38:39.489824 | orchestrator | 2025-06-19 10:38:39.489841 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-06-19 10:38:39.489860 | orchestrator | 2025-06-19 10:38:39.489879 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-06-19 10:38:39.489894 | orchestrator | Thursday 19 June 2025 10:38:26 +0000 (0:00:11.600) 0:01:15.522 ********* 2025-06-19 10:38:39.489904 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:38:39.489924 | orchestrator | 2025-06-19 10:38:39.489935 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-06-19 10:38:39.489945 | orchestrator | 2025-06-19 10:38:39.489956 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-06-19 10:38:39.489966 | orchestrator | Thursday 19 June 2025 10:38:37 +0000 (0:00:11.202) 0:01:26.725 ********* 2025-06-19 10:38:39.489977 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:38:39.489988 | orchestrator | 2025-06-19 10:38:39.490006 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-19 10:38:39.490075 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-19 10:38:39.490089 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-19 10:38:39.490106 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-19 10:38:39.490118 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-19 
10:38:39.490129 | orchestrator | 2025-06-19 10:38:39.490139 | orchestrator | 2025-06-19 10:38:39.490150 | orchestrator | 2025-06-19 10:38:39.490161 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-19 10:38:39.490172 | orchestrator | Thursday 19 June 2025 10:38:38 +0000 (0:00:01.061) 0:01:27.786 ********* 2025-06-19 10:38:39.490183 | orchestrator | =============================================================================== 2025-06-19 10:38:39.490323 | orchestrator | Create admin user ------------------------------------------------------ 53.15s 2025-06-19 10:38:39.490334 | orchestrator | Restart ceph manager service ------------------------------------------- 23.86s 2025-06-19 10:38:39.490344 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.07s 2025-06-19 10:38:39.490355 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.61s 2025-06-19 10:38:39.490365 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.40s 2025-06-19 10:38:39.490376 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.31s 2025-06-19 10:38:39.490386 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.06s 2025-06-19 10:38:39.490397 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.03s 2025-06-19 10:38:39.490407 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.94s 2025-06-19 10:38:39.490418 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.93s 2025-06-19 10:38:39.490429 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.13s 2025-06-19 10:38:39.490446 | orchestrator | 2025-06-19 10:38:39 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED 2025-06-19 
10:38:39.490457 | orchestrator | 2025-06-19 10:38:39 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:38:42.526591 | orchestrator | 2025-06-19 10:38:42 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED 2025-06-19 10:38:42.528277 | orchestrator | 2025-06-19 10:38:42 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED 2025-06-19 10:38:42.528605 | orchestrator | 2025-06-19 10:38:42 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:38:42.529343 | orchestrator | 2025-06-19 10:38:42 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED 2025-06-19 10:38:42.529365 | orchestrator | 2025-06-19 10:38:42 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:38:45.553230 | orchestrator | 2025-06-19 10:38:45 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED 2025-06-19 10:38:45.554261 | orchestrator | 2025-06-19 10:38:45 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED 2025-06-19 10:38:45.554680 | orchestrator | 2025-06-19 10:38:45 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:38:45.555277 | orchestrator | 2025-06-19 10:38:45 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED 2025-06-19 10:38:45.555303 | orchestrator | 2025-06-19 10:38:45 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:38:48.583383 | orchestrator | 2025-06-19 10:38:48 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED 2025-06-19 10:38:48.583878 | orchestrator | 2025-06-19 10:38:48 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED 2025-06-19 10:38:48.585003 | orchestrator | 2025-06-19 10:38:48 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:38:48.585896 | orchestrator | 2025-06-19 10:38:48 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED 2025-06-19 10:38:48.585918 | orchestrator 
| 2025-06-19 10:38:48 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:38:51.618537 | orchestrator | 2025-06-19 10:38:51 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED
2025-06-19 10:38:51.619493 | orchestrator | 2025-06-19 10:38:51 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED
2025-06-19 10:38:51.619557 | orchestrator | 2025-06-19 10:38:51 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED
2025-06-19 10:38:51.620283 | orchestrator | 2025-06-19 10:38:51 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED
2025-06-19 10:38:51.620959 | orchestrator | 2025-06-19 10:38:51 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:38:54.651075 | orchestrator | 2025-06-19 10:38:54 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED
2025-06-19 10:38:54.651193 | orchestrator | 2025-06-19 10:38:54 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED
2025-06-19 10:38:54.651482 | orchestrator | 2025-06-19 10:38:54 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED
2025-06-19 10:38:54.652085 | orchestrator | 2025-06-19 10:38:54 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED
2025-06-19 10:38:54.655369 | orchestrator | 2025-06-19 10:38:54 | INFO  | Task 093b3c9c-94ad-4f92-a3e0-e407ee74516c is in state STARTED
2025-06-19 10:38:54.655421 | orchestrator | 2025-06-19 10:38:54 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:38:57.698459 | orchestrator | 2025-06-19 10:38:57 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED
2025-06-19 10:38:57.698568 | orchestrator | 2025-06-19 10:38:57 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED
2025-06-19 10:38:57.701574 | orchestrator | 2025-06-19 10:38:57 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED
2025-06-19 10:38:57.702343 | orchestrator | 2025-06-19 10:38:57 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED
2025-06-19 10:38:57.706206 | orchestrator | 2025-06-19 10:38:57 | INFO  | Task 093b3c9c-94ad-4f92-a3e0-e407ee74516c is in state STARTED
2025-06-19 10:38:57.706248 | orchestrator | 2025-06-19 10:38:57 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:39:00.735580 | orchestrator | 2025-06-19 10:39:00 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED
2025-06-19 10:39:00.735682 | orchestrator | 2025-06-19 10:39:00 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED
2025-06-19 10:39:00.736938 | orchestrator | 2025-06-19 10:39:00 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED
2025-06-19 10:39:00.737001 | orchestrator | 2025-06-19 10:39:00 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED
2025-06-19 10:39:00.737287 | orchestrator | 2025-06-19 10:39:00 | INFO  | Task 093b3c9c-94ad-4f92-a3e0-e407ee74516c is in state STARTED
2025-06-19 10:39:00.737393 | orchestrator | 2025-06-19 10:39:00 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:39:03.766305 | orchestrator | 2025-06-19 10:39:03 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED
2025-06-19 10:39:03.766412 | orchestrator | 2025-06-19 10:39:03 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED
2025-06-19 10:39:03.766427 | orchestrator | 2025-06-19 10:39:03 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED
2025-06-19 10:39:03.766438 | orchestrator | 2025-06-19 10:39:03 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED
2025-06-19 10:39:03.766449 | orchestrator | 2025-06-19 10:39:03 | INFO  | Task 093b3c9c-94ad-4f92-a3e0-e407ee74516c is in state STARTED
2025-06-19 10:39:03.766461 | orchestrator | 2025-06-19 10:39:03 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:39:06.801998 | orchestrator | 2025-06-19 10:39:06 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED
2025-06-19 10:39:06.802161 | orchestrator | 2025-06-19 10:39:06 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED
2025-06-19 10:39:06.802654 | orchestrator | 2025-06-19 10:39:06 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED
2025-06-19 10:39:06.803612 | orchestrator | 2025-06-19 10:39:06 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED
2025-06-19 10:39:06.804325 | orchestrator | 2025-06-19 10:39:06 | INFO  | Task 093b3c9c-94ad-4f92-a3e0-e407ee74516c is in state STARTED
2025-06-19 10:39:06.804348 | orchestrator | 2025-06-19 10:39:06 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:39:09.822214 | orchestrator | 2025-06-19 10:39:09 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED
2025-06-19 10:39:09.822509 | orchestrator | 2025-06-19 10:39:09 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED
2025-06-19 10:39:09.823270 | orchestrator | 2025-06-19 10:39:09 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED
2025-06-19 10:39:09.824966 | orchestrator | 2025-06-19 10:39:09 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED
2025-06-19 10:39:09.825094 | orchestrator | 2025-06-19 10:39:09 | INFO  | Task 093b3c9c-94ad-4f92-a3e0-e407ee74516c is in state SUCCESS
2025-06-19 10:39:09.825122 | orchestrator | 2025-06-19 10:39:09 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:39:12.860363 | orchestrator | 2025-06-19 10:39:12 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED
2025-06-19 10:39:12.861416 | orchestrator | 2025-06-19 10:39:12 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED
2025-06-19 10:39:12.861811 | orchestrator | 2025-06-19 10:39:12 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED
2025-06-19 10:39:12.862440 | orchestrator | 2025-06-19 10:39:12 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED
2025-06-19 10:39:12.862461 | orchestrator | 2025-06-19 10:39:12 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:39:15.886380 | orchestrator | 2025-06-19 10:39:15 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED
2025-06-19 10:39:15.886515 | orchestrator | 2025-06-19 10:39:15 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED
2025-06-19 10:39:15.886803 | orchestrator | 2025-06-19 10:39:15 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED
2025-06-19 10:39:15.887372 | orchestrator | 2025-06-19 10:39:15 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED
2025-06-19 10:39:15.887395 | orchestrator | 2025-06-19 10:39:15 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:39:18.915831 | orchestrator | 2025-06-19 10:39:18 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED
2025-06-19 10:39:18.915930 | orchestrator | 2025-06-19 10:39:18 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED
2025-06-19 10:39:18.916232 | orchestrator | 2025-06-19 10:39:18 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED
2025-06-19 10:39:18.916793 | orchestrator | 2025-06-19 10:39:18 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED
2025-06-19 10:39:18.916819 | orchestrator | 2025-06-19 10:39:18 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:39:21.940535 | orchestrator | 2025-06-19 10:39:21 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED
2025-06-19 10:39:21.941530 | orchestrator | 2025-06-19 10:39:21 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED
2025-06-19 10:39:21.942282 | orchestrator | 2025-06-19 10:39:21 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED
2025-06-19 10:39:21.945309 | orchestrator | 2025-06-19 10:39:21 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED
2025-06-19 10:39:21.945340 | orchestrator | 2025-06-19 10:39:21 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:39:24.975063 | orchestrator | 2025-06-19 10:39:24 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED
2025-06-19 10:39:24.977042 | orchestrator | 2025-06-19 10:39:24 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED
2025-06-19 10:39:24.978570 | orchestrator | 2025-06-19 10:39:24 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED
2025-06-19 10:39:24.980021 | orchestrator | 2025-06-19 10:39:24 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED
2025-06-19 10:39:24.980120 | orchestrator | 2025-06-19 10:39:24 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:39:28.013021 | orchestrator | 2025-06-19 10:39:28 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED
2025-06-19 10:39:28.014758 | orchestrator | 2025-06-19 10:39:28 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED
2025-06-19 10:39:28.015402 | orchestrator | 2025-06-19 10:39:28 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED
2025-06-19 10:39:28.015856 | orchestrator | 2025-06-19 10:39:28 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED
2025-06-19 10:39:28.015878 | orchestrator | 2025-06-19 10:39:28 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:39:31.043075 | orchestrator | 2025-06-19 10:39:31 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED
2025-06-19 10:39:31.045011 | orchestrator | 2025-06-19 10:39:31 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED
2025-06-19 10:39:31.047912 | orchestrator | 2025-06-19 10:39:31 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED
2025-06-19 10:39:31.049275 | orchestrator | 2025-06-19 10:39:31 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED
2025-06-19 10:39:31.049401 | orchestrator | 2025-06-19 10:39:31 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:39:34.085247 | orchestrator | 2025-06-19 10:39:34 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED
2025-06-19 10:39:34.086770 | orchestrator | 2025-06-19 10:39:34 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED
2025-06-19 10:39:34.087711 | orchestrator | 2025-06-19 10:39:34 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED
2025-06-19 10:39:34.089265 | orchestrator | 2025-06-19 10:39:34 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED
2025-06-19 10:39:34.089294 | orchestrator | 2025-06-19 10:39:34 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:39:37.134611 | orchestrator | 2025-06-19 10:39:37 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED
2025-06-19 10:39:37.139518 | orchestrator | 2025-06-19 10:39:37 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED
2025-06-19 10:39:37.141374 | orchestrator | 2025-06-19 10:39:37 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED
2025-06-19 10:39:37.142446 | orchestrator | 2025-06-19 10:39:37 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED
2025-06-19 10:39:37.142612 | orchestrator | 2025-06-19 10:39:37 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:39:40.189393 | orchestrator | 2025-06-19 10:39:40 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED
2025-06-19 10:39:40.189499 | orchestrator | 2025-06-19 10:39:40 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED
2025-06-19 10:39:40.189515 | orchestrator | 2025-06-19 10:39:40 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED
2025-06-19 10:39:40.190239 | orchestrator | 2025-06-19 10:39:40 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED
2025-06-19 10:39:40.190271 | orchestrator | 2025-06-19 10:39:40 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:39:43.228951 | orchestrator | 2025-06-19 10:39:43 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED
2025-06-19 10:39:43.229055 | orchestrator | 2025-06-19 10:39:43 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED
2025-06-19 10:39:43.229317 | orchestrator | 2025-06-19 10:39:43 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED
2025-06-19 10:39:43.230383 | orchestrator | 2025-06-19 10:39:43 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED
2025-06-19 10:39:43.230407 | orchestrator | 2025-06-19 10:39:43 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:39:46.271583 | orchestrator | 2025-06-19 10:39:46 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED
2025-06-19 10:39:46.273411 | orchestrator | 2025-06-19 10:39:46 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED
2025-06-19 10:39:46.275747 | orchestrator | 2025-06-19 10:39:46 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED
2025-06-19 10:39:46.276789 | orchestrator | 2025-06-19 10:39:46 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED
2025-06-19 10:39:46.276822 | orchestrator | 2025-06-19 10:39:46 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:39:49.319151 | orchestrator | 2025-06-19 10:39:49 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED
2025-06-19 10:39:49.321092 | orchestrator | 2025-06-19 10:39:49 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED
2025-06-19 10:39:49.322906 | orchestrator | 2025-06-19 10:39:49 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED
2025-06-19 10:39:49.325111 | orchestrator | 2025-06-19 10:39:49 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED
2025-06-19 10:39:49.325171 | orchestrator | 2025-06-19 10:39:49 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:39:52.366416 | orchestrator | 2025-06-19 10:39:52 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED
2025-06-19 10:39:52.368382 | orchestrator | 2025-06-19 10:39:52 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED
2025-06-19 10:39:52.370724 | orchestrator | 2025-06-19 10:39:52 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED
2025-06-19 10:39:52.373181 | orchestrator | 2025-06-19 10:39:52 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED
2025-06-19 10:39:52.373842 | orchestrator | 2025-06-19 10:39:52 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:39:55.421085 | orchestrator | 2025-06-19 10:39:55 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED
2025-06-19 10:39:55.422881 | orchestrator | 2025-06-19 10:39:55 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED
2025-06-19 10:39:55.425309 | orchestrator | 2025-06-19 10:39:55 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED
2025-06-19 10:39:55.426545 | orchestrator | 2025-06-19 10:39:55 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED
2025-06-19 10:39:55.426574 | orchestrator | 2025-06-19 10:39:55 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:39:58.472242 | orchestrator | 2025-06-19 10:39:58 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED
2025-06-19 10:39:58.473111 | orchestrator | 2025-06-19 10:39:58 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED
2025-06-19 10:39:58.474843 | orchestrator | 2025-06-19 10:39:58 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED
2025-06-19 10:39:58.478434 | orchestrator | 2025-06-19 10:39:58 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED
2025-06-19 10:39:58.478679 | orchestrator | 2025-06-19 10:39:58 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:40:01.512942 | orchestrator | 2025-06-19 10:40:01 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED
2025-06-19 10:40:01.513153 | orchestrator | 2025-06-19 10:40:01 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED
2025-06-19 10:40:01.514003 | orchestrator | 2025-06-19 10:40:01 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED
2025-06-19 10:40:01.516097 | orchestrator | 2025-06-19 10:40:01 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED
2025-06-19 10:40:01.516189 | orchestrator | 2025-06-19 10:40:01 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:40:04.546070 | orchestrator | 2025-06-19 10:40:04 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED
2025-06-19 10:40:04.546876 | orchestrator | 2025-06-19 10:40:04 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state STARTED
2025-06-19 10:40:04.548180 | orchestrator | 2025-06-19 10:40:04 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED
2025-06-19 10:40:04.549824 | orchestrator | 2025-06-19 10:40:04 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED
2025-06-19 10:40:04.549902 | orchestrator | 2025-06-19 10:40:04 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:40:07.590512 | orchestrator | 2025-06-19 10:40:07 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED
2025-06-19 10:40:07.593177 | orchestrator | 2025-06-19 10:40:07 | INFO  | Task 7f583bf6-6482-4dd8-a69a-d4f9ebd88d1a is in state SUCCESS
2025-06-19 10:40:07.594991 | orchestrator |
2025-06-19 10:40:07.595029 | orchestrator | None
2025-06-19 10:40:07.595041 | orchestrator |
2025-06-19 10:40:07.595110 | orchestrator | PLAY [Group hosts based on configuration]
**************************************
2025-06-19 10:40:07.595209 | orchestrator |
2025-06-19 10:40:07.595225 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-19 10:40:07.595236 | orchestrator | Thursday 19 June 2025 10:37:11 +0000 (0:00:00.276) 0:00:00.276 *********
2025-06-19 10:40:07.595248 | orchestrator | ok: [testbed-manager]
2025-06-19 10:40:07.595259 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:40:07.595270 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:40:07.595343 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:40:07.595428 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:40:07.595441 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:40:07.595452 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:40:07.595463 | orchestrator |
2025-06-19 10:40:07.595474 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-19 10:40:07.595485 | orchestrator | Thursday 19 June 2025 10:37:12 +0000 (0:00:00.733) 0:00:01.010 *********
2025-06-19 10:40:07.595496 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2025-06-19 10:40:07.595507 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2025-06-19 10:40:07.595550 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2025-06-19 10:40:07.595572 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2025-06-19 10:40:07.595594 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2025-06-19 10:40:07.595687 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2025-06-19 10:40:07.595704 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2025-06-19 10:40:07.595738 | orchestrator |
2025-06-19 10:40:07.595752 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2025-06-19 10:40:07.595765 | orchestrator |
2025-06-19 10:40:07.595778 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2025-06-19 10:40:07.595832 | orchestrator | Thursday 19 June 2025 10:37:13 +0000 (0:00:00.671) 0:00:01.682 *********
2025-06-19 10:40:07.595846 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-19 10:40:07.595860 | orchestrator |
2025-06-19 10:40:07.595873 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2025-06-19 10:40:07.595886 | orchestrator | Thursday 19 June 2025 10:37:14 +0000 (0:00:01.246) 0:00:02.928 *********
2025-06-19 10:40:07.595915 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-06-19 10:40:07.595934 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-19 10:40:07.595948 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-19 10:40:07.595973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:40:07.596001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-19 10:40:07.596014 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-19 10:40:07.596026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-19 10:40:07.596043 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:40:07.596055 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-19 10:40:07.596067 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-19 10:40:07.596084 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-19 10:40:07.596095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:40:07.596116 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-06-19 10:40:07.596131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:40:07.596149 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-19 10:40:07.596162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-19 10:40:07.596173 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-19 10:40:07.596196 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-19 10:40:07.596399 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:40:07.596423 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:40:07.596436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:40:07.596447 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-19 10:40:07.596464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:40:07.596475 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-19 10:40:07.596495 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-19 10:40:07.596506 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-19 10:40:07.596517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-19 10:40:07.596535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:40:07.596546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:40:07.596558 | orchestrator |
2025-06-19 10:40:07.596570 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2025-06-19 10:40:07.596590 | orchestrator | Thursday 19 June 2025 10:37:17 +0000 (0:00:03.187) 0:00:06.115 *********
2025-06-19 10:40:07.596609 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-19 10:40:07.596651 | orchestrator |
2025-06-19 10:40:07.596672 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] *****
2025-06-19 10:40:07.596692 | orchestrator | Thursday 19 June 2025 10:37:18 +0000 (0:00:01.379) 0:00:07.495 *********
2025-06-19 10:40:07.596717 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-06-19 10:40:07.596737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-19 10:40:07.596749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-19 10:40:07.596760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-19 10:40:07.596881 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-19 10:40:07.596897 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-19 10:40:07.596908 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-19 10:40:07.596919 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-19 10:40:07.596936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:40:07.596956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:40:07.596967 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:40:07.596979 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-19 10:40:07.596998 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-19 10:40:07.597034 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-19 10:40:07.597048 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-19 10:40:07.597059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:40:07.597088 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-19 10:40:07.597100 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:40:07.597112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:40:07.597130 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 
'active_passive': True}}}}) 2025-06-19 10:40:07.597143 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-19 10:40:07.597155 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-19 10:40:07.597166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-19 10:40:07.597188 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:40:07.597199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-19 10:40:07.597211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-19 10:40:07.597222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:40:07.598254 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:40:07.598286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:40:07.598297 | orchestrator | 2025-06-19 10:40:07.598309 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-06-19 10:40:07.598321 | orchestrator | Thursday 19 June 2025 10:37:24 +0000 (0:00:05.816) 0:00:13.312 ********* 2025-06-19 10:40:07.598332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-19 10:40:07.598363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 
'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:40:07.598375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:40:07.598387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-19 10:40:07.598399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:40:07.598443 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-19 10:40:07.598456 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-19 10:40:07.598467 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-19 10:40:07.598492 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-19 10:40:07.598505 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:40:07.598517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-19 10:40:07.598528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:40:07.598568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:40:07.598581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-19 10:40:07.598592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:40:07.598610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-19 10:40:07.598652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:40:07.598664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:40:07.598676 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-19 10:40:07.598696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:40:07.598716 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:40:07.598735 | orchestrator | skipping: [testbed-manager] 2025-06-19 10:40:07.598755 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:40:07.598774 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:40:07.598838 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-19 10:40:07.598854 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-19 10:40:07.598875 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-19 10:40:07.598888 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:40:07.598906 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-19 10:40:07.598919 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-19 10:40:07.598932 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-19 10:40:07.598944 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:40:07.598957 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-19 10:40:07.598969 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-19 10:40:07.599014 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-19 10:40:07.599034 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:40:07.599046 | orchestrator | 2025-06-19 10:40:07.599057 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-06-19 10:40:07.599068 | orchestrator | Thursday 19 June 2025 10:37:26 +0000 (0:00:01.488) 0:00:14.800 ********* 2025-06-19 10:40:07.599079 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-19 10:40:07.599096 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-19 10:40:07.599108 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-19 10:40:07.599119 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-19 10:40:07.599131 | orchestrator | skipping: [testbed-manager] 
=> (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:40:07.599179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-19 10:40:07.599193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:40:07.599204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:40:07.599220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-19 10:40:07.599232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:40:07.599244 | orchestrator | skipping: [testbed-manager] 2025-06-19 10:40:07.599255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-19 10:40:07.599266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:40:07.599277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:40:07.599331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-19 10:40:07.599345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2025-06-19 10:40:07.599356 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:40:07.599367 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:40:07.599378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-19 10:40:07.599394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:40:07.599405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:40:07.599417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 
'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-19 10:40:07.599428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-19 10:40:07.599447 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:40:07.599488 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-19 10:40:07.599501 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-19 10:40:07.599513 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-19 10:40:07.599523 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:40:07.599534 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-19 10:40:07.599551 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-19 10:40:07.599563 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 
'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-19 10:40:07.599573 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:40:07.599584 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-19 10:40:07.599602 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-19 10:40:07.599707 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-19 10:40:07.599724 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:40:07.599744 | orchestrator | 2025-06-19 10:40:07.599763 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-06-19 10:40:07.599784 | orchestrator | Thursday 19 June 2025 10:37:28 +0000 (0:00:01.854) 0:00:16.655 ********* 2025-06-19 10:40:07.599805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-19 10:40:07.599825 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-19 10:40:07.599844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-19 10:40:07.599855 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-19 10:40:07.599866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-19 10:40:07.599885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:40:07.599927 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:40:07.599939 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-19 10:40:07.599949 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-19 10:40:07.599959 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-19 10:40:07.599974 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-19 10:40:07.599984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:40:07.599999 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-19 10:40:07.600010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:40:07.600046 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-19 10:40:07.600058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:40:07.600068 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 
'dimensions': {}}}) 2025-06-19 10:40:07.600078 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-19 10:40:07.600092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:40:07.600103 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-19 10:40:07.600120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-19 10:40:07.600156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-19 10:40:07.600168 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-19 10:40:07.600178 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:40:07.600188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-19 10:40:07.600202 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-19 10:40:07.600218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:40:07.600228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:40:07.600238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-19 10:40:07.600247 | orchestrator |
2025-06-19 10:40:07.600257 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2025-06-19 10:40:07.600267 | orchestrator | Thursday 19 June 2025 10:37:34 +0000 (0:00:06.340) 0:00:22.995 *********
2025-06-19 10:40:07.600277 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-19 10:40:07.600286 | orchestrator |
2025-06-19 10:40:07.600296 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2025-06-19 10:40:07.600332 | orchestrator | Thursday 19 June 2025 10:37:35 +0000 (0:00:01.323) 0:00:24.320 *********
2025-06-19 10:40:07.600344 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 846201, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9655168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-19 10:40:07.600355 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 846201, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9655168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-19 10:40:07.600369 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 846201, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9655168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-19 10:40:07.600385 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 3, 'inode': 846201, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9655168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-19 10:40:07.600395 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 846201, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9655168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-19 10:40:07.600405 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 846192, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.963517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-19 10:40:07.600440 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 846201, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9655168, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-19 10:40:07.600452 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 846201, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9655168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-19 10:40:07.600462 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 846192, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.963517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-19 10:40:07.600472 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 846192, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.963517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False})  2025-06-19 10:40:07.600495 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 846192, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.963517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-19 10:40:07.600506 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 846192, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.963517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-19 10:40:07.600515 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 846173, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.960517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-19 10:40:07.600525 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 846192, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.963517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-19 10:40:07.600562 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 846173, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.960517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-19 10:40:07.600574 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 846173, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.960517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-19 10:40:07.600584 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 
'inode': 846173, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.960517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-19 10:40:07.600603 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 846192, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.963517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-19 10:40:07.600633 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 846176, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.960517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-19 10:40:07.600644 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 846173, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.960517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-19 10:40:07.600654 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 846176, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.960517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-19 10:40:07.600693 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 846176, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.960517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-19 10:40:07.600704 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 846173, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.960517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  
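The loop items logged above follow the kolla-ansible container-definition shape (`container_name`, `image`, `volumes`, `dimensions`, `enabled`). A minimal sketch of how such an item can be summarized when reading these logs; the dict shape is taken from the entries above, while the helper name and the summary fields are illustrative, not part of the deployment:

```python
# Summarize a kolla-ansible loop item as seen in the log above.
# The item structure mirrors the logged entries; summarize_container
# is an illustrative helper, not something the playbooks define.

def summarize_container(item: dict) -> dict:
    value = item["value"]
    # The image tag follows the last ':' in the reference, e.g. ":2024.2".
    tag = value["image"].rsplit(":", 1)[1]
    # Count bind mounts declared read-only (suffix ":ro").
    ro_mounts = [v for v in value.get("volumes", []) if v.endswith(":ro")]
    return {
        "name": value["container_name"],
        "tag": tag,
        "read_only_mounts": len(ro_mounts),
        "enabled": value.get("enabled", False),
    }

# Example item, abbreviated from the prometheus-cadvisor entries above.
item = {
    "key": "prometheus-cadvisor",
    "value": {
        "container_name": "prometheus_cadvisor",
        "enabled": True,
        "image": "registry.osism.tech/kolla/prometheus-cadvisor:2024.2",
        "volumes": ["/etc/localtime:/etc/localtime:ro", "/var/run:/var/run:rw"],
        "dimensions": {},
    },
}
print(summarize_container(item))
```

Such a summary makes it easier to spot, for instance, which images were deployed at which tag across the changed/skipping lines in this section.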
2025-06-19 10:40:07.600715 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 846190, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.963517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-19 10:40:07.600734 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 846176, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.960517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-19 10:40:07.600745 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 846190, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.963517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-19 10:40:07.600755 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 846190, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.963517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-19 10:40:07.600772 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 846176, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.960517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-19 10:40:07.600831 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 846176, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.960517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-19 10:40:07.600851 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 846178, 'dev': 111, 'nlink': 
1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9615169, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-19 10:40:07.600877 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 846173, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.960517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-19 10:40:07.600892 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 846178, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9615169, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-19 10:40:07.600902 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 846190, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.963517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-19 10:40:07.600912 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 846178, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9615169, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-19 10:40:07.600922 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 846186, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.963517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-19 10:40:07.600961 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 846190, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.963517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-19 10:40:07.600973 | orchestrator | 
skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 846186, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.963517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-19 10:40:07.600988 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 846190, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.963517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-19 10:40:07.601003 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 846178, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9615169, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-19 10:40:07.601013 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 846194, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9645169, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-19 10:40:07.601023 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 846178, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9615169, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-19 10:40:07.601033 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 846194, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9645169, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-19 10:40:07.601076 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 846178, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 
'ctime': 1750327055.9615169, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.601091 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 846186, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.963517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.601114 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 846186, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.963517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.601129 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 846186, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.963517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.601139 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 846176, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.960517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.601149 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 846199, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9655168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.601159 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 846194, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9645169, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.601198 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 846199, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9655168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.601210 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 846186, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.963517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.601225 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 846194, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9645169, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.601240 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 846199, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9655168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.601250 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 846194, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9645169, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.601260 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 846213, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.968517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.601270 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 846213, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.968517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.601315 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 846213, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.968517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.601340 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 846194, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9645169, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.601355 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 846199, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9655168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.601379 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 846199, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9655168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.601396 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 846195, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9645169, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.601412 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 846213, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.968517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.601430 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 846195, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9645169, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.601562 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 846195, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9645169, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.601592 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 846190, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.963517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.601602 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 846199, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9655168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.601643 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 846177, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.960517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.601656 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 846195, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9645169, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.601666 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 846213, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.968517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.601676 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 846213, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.968517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.601723 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 846177, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.960517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.601741 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 846177, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.960517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.601751 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 846183, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9625168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.601765 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 846195, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9645169, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.601775 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 846195, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9645169, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.601785 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 846177, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.960517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.601795 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 846171, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9595168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.601815 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 846183, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9625168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.601841 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 846183, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9625168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.601859 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 846177, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.960517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.601877 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 846178, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9615169, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.601890 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 846191, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.963517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.601900 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 846183, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9625168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.601910 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 846177, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.960517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.601959 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 846171, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9595168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.601971 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 846183, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9625168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.601982 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 846171, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9595168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.601995 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 846171, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9595168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.602005 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 846211, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.968517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.602057 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 846183, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9625168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.602070 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 846191, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.963517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.602093 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 846171, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9595168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.602103 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 846191, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.963517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.602113 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 846182, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9625168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.602128 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 846191, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.963517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.602138 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 846186, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.963517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.602148 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 846171, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9595168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.602158 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 846202, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.966517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.602173 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:40:07.602189 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 846211, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.968517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.602199 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 846211, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.968517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.602209 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 846191, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.963517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.602223 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 846191, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.963517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.602233 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 846211, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.968517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.602243 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 846182, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9625168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.602258 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 846211, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.968517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.602273 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 846182, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9625168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.602283 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 846182, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9625168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.602293 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 846202, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.966517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.602303 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:40:07.602317 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 846211, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.968517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.602327 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 846182, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9625168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-19 10:40:07.602337 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 846202, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.966517, 'gr_name':
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-19 10:40:07.602356 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:40:07.602366 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 846202, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.966517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-19 10:40:07.602376 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:40:07.602392 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 846182, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9625168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-19 10:40:07.602402 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 846194, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9645169, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-19 10:40:07.602412 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 846202, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.966517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-19 10:40:07.602421 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:40:07.602435 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 846202, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.966517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-19 10:40:07.602445 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:40:07.602455 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 846199, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9655168, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-19 10:40:07.602470 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 846213, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.968517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-19 10:40:07.602480 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 846195, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9645169, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-19 10:40:07.602495 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 846177, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.960517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 
2025-06-19 10:40:07.602505 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 846183, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9625168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-19 10:40:07.602515 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 846171, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9595168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-19 10:40:07.602528 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 846191, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.963517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-19 10:40:07.602539 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 846211, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.968517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-19 10:40:07.602553 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 846182, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9625168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-19 10:40:07.602563 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 846202, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.966517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-19 10:40:07.602573 | orchestrator | 2025-06-19 10:40:07.602583 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-06-19 10:40:07.602592 | orchestrator | Thursday 19 June 2025 10:37:57 +0000 (0:00:21.673) 0:00:45.993 ********* 2025-06-19 10:40:07.602602 | 
orchestrator | ok: [testbed-manager -> localhost] 2025-06-19 10:40:07.602633 | orchestrator | 2025-06-19 10:40:07.602645 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-06-19 10:40:07.602655 | orchestrator | Thursday 19 June 2025 10:37:58 +0000 (0:00:00.702) 0:00:46.695 ********* 2025-06-19 10:40:07.602670 | orchestrator | [WARNING]: Skipped 2025-06-19 10:40:07.602680 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-19 10:40:07.602690 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2025-06-19 10:40:07.602699 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-19 10:40:07.602709 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-06-19 10:40:07.602718 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-19 10:40:07.602728 | orchestrator | [WARNING]: Skipped 2025-06-19 10:40:07.602737 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-19 10:40:07.602747 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-06-19 10:40:07.602757 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-19 10:40:07.602766 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-06-19 10:40:07.602775 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-19 10:40:07.602785 | orchestrator | [WARNING]: Skipped 2025-06-19 10:40:07.602794 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-19 10:40:07.602804 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-06-19 10:40:07.602813 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-19 10:40:07.602823 | orchestrator | node-1/prometheus.yml.d' is not a directory 2025-06-19 10:40:07.602832 | orchestrator | 
[WARNING]: Skipped 2025-06-19 10:40:07.602842 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-19 10:40:07.602859 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2025-06-19 10:40:07.602873 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-19 10:40:07.602891 | orchestrator | node-2/prometheus.yml.d' is not a directory 2025-06-19 10:40:07.602910 | orchestrator | [WARNING]: Skipped 2025-06-19 10:40:07.602919 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-19 10:40:07.602929 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2025-06-19 10:40:07.602939 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-19 10:40:07.602948 | orchestrator | node-3/prometheus.yml.d' is not a directory 2025-06-19 10:40:07.602957 | orchestrator | [WARNING]: Skipped 2025-06-19 10:40:07.602967 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-19 10:40:07.602981 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2025-06-19 10:40:07.602991 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-19 10:40:07.603000 | orchestrator | node-4/prometheus.yml.d' is not a directory 2025-06-19 10:40:07.603010 | orchestrator | [WARNING]: Skipped 2025-06-19 10:40:07.603019 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-19 10:40:07.603029 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2025-06-19 10:40:07.603038 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-19 10:40:07.603047 | orchestrator | node-5/prometheus.yml.d' is not a directory 2025-06-19 10:40:07.603057 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-06-19 
10:40:07.603066 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-06-19 10:40:07.603075 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-19 10:40:07.603084 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-06-19 10:40:07.603094 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-06-19 10:40:07.603103 | orchestrator | 2025-06-19 10:40:07.603113 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-06-19 10:40:07.603122 | orchestrator | Thursday 19 June 2025 10:38:00 +0000 (0:00:01.895) 0:00:48.591 ********* 2025-06-19 10:40:07.603132 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-19 10:40:07.603145 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:40:07.603161 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-19 10:40:07.603171 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-19 10:40:07.603181 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:40:07.603191 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:40:07.603200 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-19 10:40:07.603210 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:40:07.603219 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-19 10:40:07.603228 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:40:07.603238 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-19 10:40:07.603247 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:40:07.603257 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-06-19 10:40:07.603266 | 
orchestrator | 2025-06-19 10:40:07.603276 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-06-19 10:40:07.603285 | orchestrator | Thursday 19 June 2025 10:38:12 +0000 (0:00:12.733) 0:01:01.324 ********* 2025-06-19 10:40:07.603295 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-19 10:40:07.603305 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:40:07.603314 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-19 10:40:07.603329 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:40:07.603339 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-19 10:40:07.603356 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:40:07.603365 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-19 10:40:07.603375 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:40:07.603384 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-19 10:40:07.603394 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-19 10:40:07.603403 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:40:07.603413 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:40:07.603422 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-06-19 10:40:07.603432 | orchestrator | 2025-06-19 10:40:07.603441 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-06-19 10:40:07.603450 | orchestrator | Thursday 19 June 2025 10:38:16 +0000 (0:00:03.468) 0:01:04.793 ********* 2025-06-19 10:40:07.603460 | orchestrator | 
skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-19 10:40:07.603470 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-19 10:40:07.603479 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:40:07.603489 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-19 10:40:07.603498 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:40:07.603507 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:40:07.603517 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-06-19 10:40:07.603527 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-19 10:40:07.603536 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:40:07.603550 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-19 10:40:07.603560 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:40:07.603569 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-19 10:40:07.603578 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:40:07.603680 | orchestrator | 2025-06-19 10:40:07.603690 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-06-19 10:40:07.603700 | orchestrator | Thursday 19 June 2025 10:38:18 +0000 (0:00:02.071) 0:01:06.864 ********* 2025-06-19 10:40:07.603709 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-19 10:40:07.603719 | 
orchestrator | 2025-06-19 10:40:07.603728 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-06-19 10:40:07.603738 | orchestrator | Thursday 19 June 2025 10:38:19 +0000 (0:00:00.729) 0:01:07.594 ********* 2025-06-19 10:40:07.603747 | orchestrator | skipping: [testbed-manager] 2025-06-19 10:40:07.603756 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:40:07.603766 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:40:07.603775 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:40:07.603784 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:40:07.603794 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:40:07.603803 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:40:07.603812 | orchestrator | 2025-06-19 10:40:07.603822 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-06-19 10:40:07.603831 | orchestrator | Thursday 19 June 2025 10:38:19 +0000 (0:00:00.799) 0:01:08.394 ********* 2025-06-19 10:40:07.603841 | orchestrator | skipping: [testbed-manager] 2025-06-19 10:40:07.603850 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:40:07.603865 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:40:07.603874 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:40:07.603884 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:40:07.603893 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:40:07.603902 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:40:07.603912 | orchestrator | 2025-06-19 10:40:07.603921 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-06-19 10:40:07.603931 | orchestrator | Thursday 19 June 2025 10:38:22 +0000 (0:00:02.896) 0:01:11.291 ********* 2025-06-19 10:40:07.603941 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-19 10:40:07.603950 | 
orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-19 10:40:07.603960 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:40:07.603969 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-19 10:40:07.603979 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-19 10:40:07.603988 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-19 10:40:07.603997 | orchestrator | skipping: [testbed-manager] 2025-06-19 10:40:07.604007 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:40:07.604016 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:40:07.604026 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:40:07.604042 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-19 10:40:07.604052 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:40:07.604062 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-19 10:40:07.604072 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:40:07.604081 | orchestrator | 2025-06-19 10:40:07.604091 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-06-19 10:40:07.604100 | orchestrator | Thursday 19 June 2025 10:38:24 +0000 (0:00:01.664) 0:01:12.956 ********* 2025-06-19 10:40:07.604110 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-19 10:40:07.604120 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-19 10:40:07.604129 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:40:07.604139 | orchestrator | skipping: [testbed-node-0] 
2025-06-19 10:40:07.604148 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-19 10:40:07.604158 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:40:07.604167 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-19 10:40:07.604177 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:40:07.604185 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-19 10:40:07.604193 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:40:07.604200 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-19 10:40:07.604208 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:40:07.604216 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-06-19 10:40:07.604224 | orchestrator | 2025-06-19 10:40:07.604232 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-06-19 10:40:07.604240 | orchestrator | Thursday 19 June 2025 10:38:26 +0000 (0:00:01.661) 0:01:14.617 ********* 2025-06-19 10:40:07.604248 | orchestrator | [WARNING]: Skipped 2025-06-19 10:40:07.604255 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-06-19 10:40:07.604268 | orchestrator | due to this access issue: 2025-06-19 10:40:07.604280 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-06-19 10:40:07.604288 | orchestrator | not a directory 2025-06-19 10:40:07.604296 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-19 10:40:07.604304 | orchestrator | 2025-06-19 10:40:07.604311 | orchestrator | TASK [prometheus : Create subdirectories for extra 
config files] *************** 2025-06-19 10:40:07.604319 | orchestrator | Thursday 19 June 2025 10:38:27 +0000 (0:00:01.010) 0:01:15.627 ********* 2025-06-19 10:40:07.604327 | orchestrator | skipping: [testbed-manager] 2025-06-19 10:40:07.604335 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:40:07.604343 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:40:07.604351 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:40:07.604358 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:40:07.604366 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:40:07.604374 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:40:07.604382 | orchestrator | 2025-06-19 10:40:07.604389 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-06-19 10:40:07.604397 | orchestrator | Thursday 19 June 2025 10:38:27 +0000 (0:00:00.779) 0:01:16.407 ********* 2025-06-19 10:40:07.604405 | orchestrator | skipping: [testbed-manager] 2025-06-19 10:40:07.604413 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:40:07.604421 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:40:07.604428 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:40:07.604436 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:40:07.604444 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:40:07.604451 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:40:07.604459 | orchestrator | 2025-06-19 10:40:07.604467 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-06-19 10:40:07.604475 | orchestrator | Thursday 19 June 2025 10:38:28 +0000 (0:00:00.676) 0:01:17.083 ********* 2025-06-19 10:40:07.604484 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-19 10:40:07.604497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-19 10:40:07.604505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-19 10:40:07.604514 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-19 10:40:07.604527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-19 10:40:07.604538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:40:07.604547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:40:07.604555 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-19 10:40:07.604563 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-19 10:40:07.604576 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-19 10:40:07.604584 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': 
{}}}) 2025-06-19 10:40:07.604596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:40:07.604605 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-19 10:40:07.604631 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 
'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-19 10:40:07.604641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:40:07.604649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:40:07.604662 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-19 10:40:07.604670 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:40:07.604683 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-19 10:40:07.604691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:40:07.604703 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-19 10:40:07.604711 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-19 10:40:07.604719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-19 10:40:07.604727 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-19 10:40:07.604739 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-19 10:40:07.604747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-19 10:40:07.604760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:40:07.604768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:40:07.604779 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-19 10:40:07.604787 | orchestrator | 2025-06-19 10:40:07.604795 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-06-19 10:40:07.604803 | orchestrator | Thursday 19 June 2025 10:38:32 +0000 (0:00:04.222) 0:01:21.306 ********* 2025-06-19 10:40:07.604811 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-06-19 10:40:07.604818 | orchestrator | skipping: [testbed-manager] 2025-06-19 10:40:07.604826 | orchestrator | 2025-06-19 10:40:07.604834 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-19 10:40:07.604842 | orchestrator | Thursday 19 June 2025 10:38:34 +0000 (0:00:01.275) 0:01:22.581 ********* 2025-06-19 10:40:07.604850 | orchestrator | 2025-06-19 10:40:07.604858 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-19 10:40:07.604865 | orchestrator | Thursday 19 June 2025 10:38:34 +0000 (0:00:00.126) 0:01:22.707 ********* 2025-06-19 10:40:07.604873 | orchestrator | 2025-06-19 10:40:07.604881 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-19 10:40:07.604888 | orchestrator | Thursday 19 June 2025 10:38:34 +0000 (0:00:00.132) 0:01:22.840 ********* 2025-06-19 10:40:07.604896 | orchestrator | 2025-06-19 10:40:07.604904 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-19 10:40:07.604912 | orchestrator | Thursday 19 
June 2025 10:38:34 +0000 (0:00:00.173) 0:01:23.013 ********* 2025-06-19 10:40:07.604920 | orchestrator | 2025-06-19 10:40:07.604927 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-19 10:40:07.604935 | orchestrator | Thursday 19 June 2025 10:38:34 +0000 (0:00:00.124) 0:01:23.137 ********* 2025-06-19 10:40:07.604943 | orchestrator | 2025-06-19 10:40:07.604950 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-19 10:40:07.604958 | orchestrator | Thursday 19 June 2025 10:38:34 +0000 (0:00:00.113) 0:01:23.251 ********* 2025-06-19 10:40:07.604966 | orchestrator | 2025-06-19 10:40:07.604978 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-19 10:40:07.604985 | orchestrator | Thursday 19 June 2025 10:38:34 +0000 (0:00:00.120) 0:01:23.372 ********* 2025-06-19 10:40:07.604993 | orchestrator | 2025-06-19 10:40:07.605001 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-06-19 10:40:07.605008 | orchestrator | Thursday 19 June 2025 10:38:34 +0000 (0:00:00.167) 0:01:23.540 ********* 2025-06-19 10:40:07.605016 | orchestrator | changed: [testbed-manager] 2025-06-19 10:40:07.605024 | orchestrator | 2025-06-19 10:40:07.605032 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-06-19 10:40:07.605039 | orchestrator | Thursday 19 June 2025 10:38:48 +0000 (0:00:14.008) 0:01:37.548 ********* 2025-06-19 10:40:07.605047 | orchestrator | changed: [testbed-manager] 2025-06-19 10:40:07.605055 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:40:07.605063 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:40:07.605074 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:40:07.605082 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:40:07.605089 | orchestrator | changed: [testbed-node-3] 2025-06-19 
10:40:07.605097 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:40:07.605105 | orchestrator | 2025-06-19 10:40:07.605113 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-06-19 10:40:07.605120 | orchestrator | Thursday 19 June 2025 10:39:05 +0000 (0:00:16.510) 0:01:54.060 ********* 2025-06-19 10:40:07.605128 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:40:07.605136 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:40:07.605144 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:40:07.605151 | orchestrator | 2025-06-19 10:40:07.605159 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-06-19 10:40:07.605167 | orchestrator | Thursday 19 June 2025 10:39:12 +0000 (0:00:06.677) 0:02:00.737 ********* 2025-06-19 10:40:07.605174 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:40:07.605182 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:40:07.605190 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:40:07.605197 | orchestrator | 2025-06-19 10:40:07.605205 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-06-19 10:40:07.605213 | orchestrator | Thursday 19 June 2025 10:39:19 +0000 (0:00:07.231) 0:02:07.969 ********* 2025-06-19 10:40:07.605221 | orchestrator | changed: [testbed-manager] 2025-06-19 10:40:07.605228 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:40:07.605236 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:40:07.605244 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:40:07.605252 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:40:07.605259 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:40:07.605267 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:40:07.605275 | orchestrator | 2025-06-19 10:40:07.605282 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 
2025-06-19 10:40:07.605290 | orchestrator | Thursday 19 June 2025 10:39:34 +0000 (0:00:14.711) 0:02:22.680 ********* 2025-06-19 10:40:07.605298 | orchestrator | changed: [testbed-manager] 2025-06-19 10:40:07.605306 | orchestrator | 2025-06-19 10:40:07.605313 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-06-19 10:40:07.605321 | orchestrator | Thursday 19 June 2025 10:39:41 +0000 (0:00:07.698) 0:02:30.379 ********* 2025-06-19 10:40:07.605329 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:40:07.605336 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:40:07.605344 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:40:07.605352 | orchestrator | 2025-06-19 10:40:07.605359 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-06-19 10:40:07.605367 | orchestrator | Thursday 19 June 2025 10:39:51 +0000 (0:00:09.894) 0:02:40.274 ********* 2025-06-19 10:40:07.605375 | orchestrator | changed: [testbed-manager] 2025-06-19 10:40:07.605383 | orchestrator | 2025-06-19 10:40:07.605394 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-06-19 10:40:07.605406 | orchestrator | Thursday 19 June 2025 10:40:01 +0000 (0:00:09.559) 0:02:49.833 ********* 2025-06-19 10:40:07.605414 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:40:07.605422 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:40:07.605430 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:40:07.605443 | orchestrator | 2025-06-19 10:40:07.605461 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-19 10:40:07.605479 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-19 10:40:07.605491 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-19 
10:40:07.605504 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-19 10:40:07.605517 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-19 10:40:07.605528 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-19 10:40:07.605539 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-19 10:40:07.605550 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-19 10:40:07.605562 | orchestrator | 2025-06-19 10:40:07.605574 | orchestrator | 2025-06-19 10:40:07.605586 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-19 10:40:07.605598 | orchestrator | Thursday 19 June 2025 10:40:06 +0000 (0:00:05.616) 0:02:55.449 ********* 2025-06-19 10:40:07.605627 | orchestrator | =============================================================================== 2025-06-19 10:40:07.605641 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 21.67s 2025-06-19 10:40:07.605654 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 16.51s 2025-06-19 10:40:07.605666 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 14.71s 2025-06-19 10:40:07.605680 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 14.01s 2025-06-19 10:40:07.605692 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 12.73s 2025-06-19 10:40:07.605706 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 9.89s 2025-06-19 10:40:07.605717 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 9.56s 2025-06-19 
10:40:07.605731 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 7.70s 2025-06-19 10:40:07.605739 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 7.23s 2025-06-19 10:40:07.605747 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 6.68s 2025-06-19 10:40:07.605755 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.34s 2025-06-19 10:40:07.605763 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.82s 2025-06-19 10:40:07.605771 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 5.62s 2025-06-19 10:40:07.605779 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.22s 2025-06-19 10:40:07.605786 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.47s 2025-06-19 10:40:07.605794 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.19s 2025-06-19 10:40:07.605802 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.90s 2025-06-19 10:40:07.605810 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.07s 2025-06-19 10:40:07.605825 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 1.90s 2025-06-19 10:40:07.605833 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 1.85s 2025-06-19 10:40:07.605841 | orchestrator | 2025-06-19 10:40:07 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:40:07.605849 | orchestrator | 2025-06-19 10:40:07 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED 2025-06-19 10:40:07.605857 | orchestrator | 2025-06-19 10:40:07 | INFO  | Wait 1 second(s) until the next check 2025-06-19 
10:40:10.628733 | orchestrator | 2025-06-19 10:40:10 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED 2025-06-19 10:40:10.628972 | orchestrator | 2025-06-19 10:40:10 | INFO  | Task 72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:40:10.629733 | orchestrator | 2025-06-19 10:40:10 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:40:10.630481 | orchestrator | 2025-06-19 10:40:10 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED 2025-06-19 10:40:10.630507 | orchestrator | 2025-06-19 10:40:10 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:40:13.653981 | orchestrator | 2025-06-19 10:40:13 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED 2025-06-19 10:40:13.654226 | orchestrator | 2025-06-19 10:40:13 | INFO  | Task 72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:40:13.654781 | orchestrator | 2025-06-19 10:40:13 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:40:13.656086 | orchestrator | 2025-06-19 10:40:13 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED 2025-06-19 10:40:13.656109 | orchestrator | 2025-06-19 10:40:13 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:40:16.686219 | orchestrator | 2025-06-19 10:40:16 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED 2025-06-19 10:40:16.688502 | orchestrator | 2025-06-19 10:40:16 | INFO  | Task 72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:40:16.690215 | orchestrator | 2025-06-19 10:40:16 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:40:16.691496 | orchestrator | 2025-06-19 10:40:16 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED 2025-06-19 10:40:16.691770 | orchestrator | 2025-06-19 10:40:16 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:40:19.719477 | orchestrator 
| 2025-06-19 10:40:19 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED 2025-06-19 10:40:19.719628 | orchestrator | 2025-06-19 10:40:19 | INFO  | Task 72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:40:19.720317 | orchestrator | 2025-06-19 10:40:19 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:40:19.721204 | orchestrator | 2025-06-19 10:40:19 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED 2025-06-19 10:40:19.721236 | orchestrator | 2025-06-19 10:40:19 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:40:22.744732 | orchestrator | 2025-06-19 10:40:22 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED 2025-06-19 10:40:22.745202 | orchestrator | 2025-06-19 10:40:22 | INFO  | Task 72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:40:22.745870 | orchestrator | 2025-06-19 10:40:22 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:40:22.747661 | orchestrator | 2025-06-19 10:40:22 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED 2025-06-19 10:40:22.747715 | orchestrator | 2025-06-19 10:40:22 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:40:25.783163 | orchestrator | 2025-06-19 10:40:25 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED 2025-06-19 10:40:25.784735 | orchestrator | 2025-06-19 10:40:25 | INFO  | Task 72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:40:25.786304 | orchestrator | 2025-06-19 10:40:25 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:40:25.787173 | orchestrator | 2025-06-19 10:40:25 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED 2025-06-19 10:40:25.787201 | orchestrator | 2025-06-19 10:40:25 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:40:28.826837 | orchestrator | 2025-06-19 10:40:28 | INFO  | 
Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED 2025-06-19 10:40:28.828256 | orchestrator | 2025-06-19 10:40:28 | INFO  | Task 72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:40:28.829567 | orchestrator | 2025-06-19 10:40:28 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:40:28.830982 | orchestrator | 2025-06-19 10:40:28 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED 2025-06-19 10:40:28.831009 | orchestrator | 2025-06-19 10:40:28 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:40:31.872248 | orchestrator | 2025-06-19 10:40:31 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED 2025-06-19 10:40:31.873253 | orchestrator | 2025-06-19 10:40:31 | INFO  | Task 72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:40:31.874939 | orchestrator | 2025-06-19 10:40:31 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:40:31.877279 | orchestrator | 2025-06-19 10:40:31 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED 2025-06-19 10:40:31.877327 | orchestrator | 2025-06-19 10:40:31 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:40:34.925566 | orchestrator | 2025-06-19 10:40:34 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state STARTED 2025-06-19 10:40:34.926564 | orchestrator | 2025-06-19 10:40:34 | INFO  | Task 72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:40:34.926906 | orchestrator | 2025-06-19 10:40:34 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:40:34.927616 | orchestrator | 2025-06-19 10:40:34 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED 2025-06-19 10:40:34.927641 | orchestrator | 2025-06-19 10:40:34 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:40:37.990904 | orchestrator | 2025-06-19 10:40:37.991433 | orchestrator | 2025-06-19 
10:40:37.991468 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-19 10:40:37.991482 | orchestrator | 2025-06-19 10:40:37.991494 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-19 10:40:37.991505 | orchestrator | Thursday 19 June 2025 10:37:34 +0000 (0:00:00.252) 0:00:00.252 ********* 2025-06-19 10:40:37.991517 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:40:37.991529 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:40:37.991540 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:40:37.991552 | orchestrator | 2025-06-19 10:40:37.991564 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-19 10:40:37.991608 | orchestrator | Thursday 19 June 2025 10:37:34 +0000 (0:00:00.287) 0:00:00.540 ********* 2025-06-19 10:40:37.991628 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-06-19 10:40:37.991648 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-06-19 10:40:37.991692 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-06-19 10:40:37.991704 | orchestrator | 2025-06-19 10:40:37.991715 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-06-19 10:40:37.991726 | orchestrator | 2025-06-19 10:40:37.991737 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-19 10:40:37.991748 | orchestrator | Thursday 19 June 2025 10:37:34 +0000 (0:00:00.433) 0:00:00.974 ********* 2025-06-19 10:40:37.991758 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:40:37.991771 | orchestrator | 2025-06-19 10:40:37.991781 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-06-19 10:40:37.991811 | orchestrator | Thursday 19 June 2025 
10:37:35 +0000 (0:00:00.574) 0:00:01.548 ********* 2025-06-19 10:40:37.991822 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-06-19 10:40:37.991833 | orchestrator | 2025-06-19 10:40:37.991843 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-06-19 10:40:37.991854 | orchestrator | Thursday 19 June 2025 10:37:52 +0000 (0:00:16.725) 0:00:18.274 ********* 2025-06-19 10:40:37.991865 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-06-19 10:40:37.991875 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-06-19 10:40:37.991886 | orchestrator | 2025-06-19 10:40:37.991897 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-06-19 10:40:37.991908 | orchestrator | Thursday 19 June 2025 10:37:59 +0000 (0:00:07.333) 0:00:25.608 ********* 2025-06-19 10:40:37.991961 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-19 10:40:37.992056 | orchestrator | 2025-06-19 10:40:37.992071 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-06-19 10:40:37.992083 | orchestrator | Thursday 19 June 2025 10:38:02 +0000 (0:00:03.265) 0:00:28.874 ********* 2025-06-19 10:40:37.992094 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-19 10:40:37.992105 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-06-19 10:40:37.992116 | orchestrator | 2025-06-19 10:40:37.992127 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-06-19 10:40:37.992138 | orchestrator | Thursday 19 June 2025 10:38:06 +0000 (0:00:03.814) 0:00:32.688 ********* 2025-06-19 10:40:37.992148 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-19 10:40:37.992159 | orchestrator | 2025-06-19 10:40:37.992170 
| orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-06-19 10:40:37.992181 | orchestrator | Thursday 19 June 2025 10:38:09 +0000 (0:00:03.017) 0:00:35.705 ********* 2025-06-19 10:40:37.992192 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-06-19 10:40:37.992202 | orchestrator | 2025-06-19 10:40:37.992213 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-06-19 10:40:37.992224 | orchestrator | Thursday 19 June 2025 10:38:13 +0000 (0:00:03.911) 0:00:39.616 ********* 2025-06-19 10:40:37.992274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-19 10:40:37.992303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 
5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-19 10:40:37.992322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-19 10:40:37.992342 | orchestrator | 2025-06-19 10:40:37.992353 | orchestrator | TASK [glance : include_tasks] ************************************************** 
2025-06-19 10:40:37.992364 | orchestrator | Thursday 19 June 2025 10:38:18 +0000 (0:00:04.828) 0:00:44.445 ********* 2025-06-19 10:40:37.992376 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:40:37.992387 | orchestrator | 2025-06-19 10:40:37.992425 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-06-19 10:40:37.992437 | orchestrator | Thursday 19 June 2025 10:38:19 +0000 (0:00:00.679) 0:00:45.124 ********* 2025-06-19 10:40:37.992448 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:40:37.992460 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:40:37.992471 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:40:37.992482 | orchestrator | 2025-06-19 10:40:37.992493 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-06-19 10:40:37.992504 | orchestrator | Thursday 19 June 2025 10:38:24 +0000 (0:00:05.123) 0:00:50.247 ********* 2025-06-19 10:40:37.992515 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-19 10:40:37.992526 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-19 10:40:37.992537 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-19 10:40:37.992547 | orchestrator | 2025-06-19 10:40:37.992558 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-06-19 10:40:37.992613 | orchestrator | Thursday 19 June 2025 10:38:26 +0000 (0:00:01.863) 0:00:52.111 ********* 2025-06-19 10:40:37.992627 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-19 10:40:37.992638 | orchestrator | changed: [testbed-node-0] => 
(item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-19 10:40:37.992648 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-19 10:40:37.992659 | orchestrator | 2025-06-19 10:40:37.992670 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-06-19 10:40:37.992680 | orchestrator | Thursday 19 June 2025 10:38:27 +0000 (0:00:01.100) 0:00:53.212 ********* 2025-06-19 10:40:37.992693 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:40:37.992706 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:40:37.992718 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:40:37.992729 | orchestrator | 2025-06-19 10:40:37.992741 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-06-19 10:40:37.992754 | orchestrator | Thursday 19 June 2025 10:38:27 +0000 (0:00:00.715) 0:00:53.927 ********* 2025-06-19 10:40:37.992766 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:40:37.992778 | orchestrator | 2025-06-19 10:40:37.992790 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-06-19 10:40:37.992802 | orchestrator | Thursday 19 June 2025 10:38:27 +0000 (0:00:00.126) 0:00:54.054 ********* 2025-06-19 10:40:37.992814 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:40:37.992826 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:40:37.992838 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:40:37.992849 | orchestrator | 2025-06-19 10:40:37.992862 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-19 10:40:37.992874 | orchestrator | Thursday 19 June 2025 10:38:28 +0000 (0:00:00.301) 0:00:54.356 ********* 2025-06-19 10:40:37.992887 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 
2025-06-19 10:40:37.992899 | orchestrator | 2025-06-19 10:40:37.992911 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-06-19 10:40:37.992931 | orchestrator | Thursday 19 June 2025 10:38:28 +0000 (0:00:00.496) 0:00:54.852 ********* 2025-06-19 10:40:37.992958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 
2000 rise 2 fall 5', '']}}}}) 2025-06-19 10:40:37.992974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-19 10:40:37.992994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-19 10:40:37.993014 | orchestrator | 2025-06-19 10:40:37.993026 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-06-19 10:40:37.993039 | orchestrator | Thursday 19 June 2025 10:38:33 +0000 (0:00:04.621) 0:00:59.473 ********* 2025-06-19 10:40:37.993060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': 
True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-19 10:40:37.993073 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:40:37.993085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-19 10:40:37.993103 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:40:37.993128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-19 10:40:37.993140 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:40:37.993151 | orchestrator | 2025-06-19 10:40:37.993161 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-06-19 10:40:37.993172 | orchestrator | Thursday 19 June 2025 10:38:37 +0000 (0:00:03.874) 0:01:03.348 ********* 2025-06-19 10:40:37.993183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 
'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-19 10:40:37.993201 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:40:37.993232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-19 10:40:37.993253 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:40:37.993273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-19 10:40:37.993302 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:40:37.993321 | orchestrator | 2025-06-19 10:40:37.993340 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-06-19 10:40:37.993360 | orchestrator | Thursday 19 June 2025 10:38:41 +0000 (0:00:03.873) 0:01:07.222 ********* 2025-06-19 10:40:37.993381 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:40:37.993402 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:40:37.993422 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:40:37.993441 | orchestrator | 2025-06-19 10:40:37.993452 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-06-19 10:40:37.993463 | orchestrator | Thursday 19 June 2025 10:38:44 +0000 (0:00:03.839) 0:01:11.062 ********* 2025-06-19 10:40:37.993496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-19 10:40:37.993510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-19 10:40:37.993535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-19 10:40:37.993548 | orchestrator | 2025-06-19 10:40:37.993559 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-06-19 10:40:37.993669 | orchestrator | Thursday 19 June 2025 10:38:49 +0000 (0:00:04.195) 0:01:15.257 ********* 2025-06-19 10:40:37.993685 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:40:37.993696 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:40:37.993707 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:40:37.993717 | orchestrator | 2025-06-19 10:40:37.993728 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-06-19 10:40:37.993739 | orchestrator | Thursday 19 June 2025 10:39:00 +0000 (0:00:10.900) 0:01:26.157 ********* 2025-06-19 10:40:37.993750 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:40:37.993760 | orchestrator | skipping: 
[testbed-node-0] 2025-06-19 10:40:37.993771 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:40:37.993781 | orchestrator | 2025-06-19 10:40:37.993792 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-06-19 10:40:37.993812 | orchestrator | Thursday 19 June 2025 10:39:04 +0000 (0:00:04.297) 0:01:30.455 ********* 2025-06-19 10:40:37.993825 | orchestrator | 2025-06-19 10:40:37 | INFO  | Task c149fc28-a66f-4306-b356-a5084b698984 is in state SUCCESS 2025-06-19 10:40:37.993836 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:40:37.993847 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:40:37.993857 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:40:37.993868 | orchestrator | 2025-06-19 10:40:37.993878 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-06-19 10:40:37.993889 | orchestrator | Thursday 19 June 2025 10:39:09 +0000 (0:00:05.134) 0:01:35.590 ********* 2025-06-19 10:40:37.993900 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:40:37.993910 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:40:37.993921 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:40:37.993931 | orchestrator | 2025-06-19 10:40:37.993950 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-06-19 10:40:37.993961 | orchestrator | Thursday 19 June 2025 10:39:16 +0000 (0:00:06.524) 0:01:42.114 ********* 2025-06-19 10:40:37.993971 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:40:37.993982 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:40:37.993992 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:40:37.994003 | orchestrator | 2025-06-19 10:40:37.994014 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-06-19 10:40:37.994110 | orchestrator | Thursday 19 June 2025 10:39:20 +0000 (0:00:04.181) 0:01:46.296
********* 2025-06-19 10:40:37.994239 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:40:37.994254 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:40:37.994263 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:40:37.994273 | orchestrator | 2025-06-19 10:40:37.994282 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-06-19 10:40:37.994292 | orchestrator | Thursday 19 June 2025 10:39:20 +0000 (0:00:00.652) 0:01:46.949 ********* 2025-06-19 10:40:37.994301 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-06-19 10:40:37.994311 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:40:37.994321 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-06-19 10:40:37.994330 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:40:37.994340 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-06-19 10:40:37.994349 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:40:37.994359 | orchestrator | 2025-06-19 10:40:37.994368 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-06-19 10:40:37.994378 | orchestrator | Thursday 19 June 2025 10:39:26 +0000 (0:00:06.058) 0:01:53.007 ********* 2025-06-19 10:40:37.994396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-19 10:40:37.994419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': 
{'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-19 10:40:37.994440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 
rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-19 10:40:37.994451 | orchestrator | 2025-06-19 10:40:37.994460 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-19 10:40:37.994470 | orchestrator | Thursday 19 June 2025 10:39:30 +0000 (0:00:04.057) 0:01:57.065 ********* 2025-06-19 10:40:37.994479 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:40:37.994489 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:40:37.994498 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:40:37.994507 | orchestrator | 2025-06-19 10:40:37.994517 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-06-19 10:40:37.994526 | orchestrator | Thursday 19 June 2025 10:39:31 +0000 (0:00:00.262) 0:01:57.327 ********* 2025-06-19 10:40:37.994536 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:40:37.994545 | orchestrator | 2025-06-19 10:40:37.994560 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-06-19 10:40:37.994588 | orchestrator | Thursday 19 June 2025 10:39:33 +0000 (0:00:01.982) 0:01:59.310 ********* 2025-06-19 10:40:37.994606 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:40:37.994616 | orchestrator | 2025-06-19 10:40:37.994625 | orchestrator | TASK [glance : Enable 
log_bin_trust_function_creators function] **************** 2025-06-19 10:40:37.994635 | orchestrator | Thursday 19 June 2025 10:39:35 +0000 (0:00:02.097) 0:02:01.407 ********* 2025-06-19 10:40:37.994644 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:40:37.994654 | orchestrator | 2025-06-19 10:40:37.994663 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-06-19 10:40:37.994673 | orchestrator | Thursday 19 June 2025 10:39:37 +0000 (0:00:01.965) 0:02:03.372 ********* 2025-06-19 10:40:37.994722 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:40:37.994798 | orchestrator | 2025-06-19 10:40:37.994809 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-06-19 10:40:37.994827 | orchestrator | Thursday 19 June 2025 10:40:04 +0000 (0:00:26.920) 0:02:30.293 ********* 2025-06-19 10:40:37.994837 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:40:37.994847 | orchestrator | 2025-06-19 10:40:37.994857 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-06-19 10:40:37.994866 | orchestrator | Thursday 19 June 2025 10:40:06 +0000 (0:00:02.052) 0:02:32.345 ********* 2025-06-19 10:40:37.994875 | orchestrator | 2025-06-19 10:40:37.994885 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-06-19 10:40:37.994895 | orchestrator | Thursday 19 June 2025 10:40:06 +0000 (0:00:00.059) 0:02:32.405 ********* 2025-06-19 10:40:37.994904 | orchestrator | 2025-06-19 10:40:37.994913 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-06-19 10:40:37.994923 | orchestrator | Thursday 19 June 2025 10:40:06 +0000 (0:00:00.182) 0:02:32.588 ********* 2025-06-19 10:40:37.994932 | orchestrator | 2025-06-19 10:40:37.994942 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-06-19 
10:40:37.994951 | orchestrator | Thursday 19 June 2025 10:40:06 +0000 (0:00:00.062) 0:02:32.651 ********* 2025-06-19 10:40:37.994960 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:40:37.994970 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:40:37.994979 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:40:37.994989 | orchestrator | 2025-06-19 10:40:37.994999 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-19 10:40:37.995008 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-19 10:40:37.995020 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-19 10:40:37.995029 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-19 10:40:37.995039 | orchestrator | 2025-06-19 10:40:37.995048 | orchestrator | 2025-06-19 10:40:37.995058 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-19 10:40:37.995067 | orchestrator | Thursday 19 June 2025 10:40:37 +0000 (0:00:30.575) 0:03:03.227 ********* 2025-06-19 10:40:37.995077 | orchestrator | =============================================================================== 2025-06-19 10:40:37.995087 | orchestrator | glance : Restart glance-api container ---------------------------------- 30.58s 2025-06-19 10:40:37.995096 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 26.92s 2025-06-19 10:40:37.995106 | orchestrator | service-ks-register : glance | Creating services ----------------------- 16.73s 2025-06-19 10:40:37.995115 | orchestrator | glance : Copying over glance-api.conf ---------------------------------- 10.90s 2025-06-19 10:40:37.995124 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 7.33s 2025-06-19 10:40:37.995134 | 
orchestrator | glance : Copying over glance-image-import.conf -------------------------- 6.52s 2025-06-19 10:40:37.995143 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 6.06s 2025-06-19 10:40:37.995160 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 5.13s 2025-06-19 10:40:37.995170 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 5.12s 2025-06-19 10:40:37.995179 | orchestrator | glance : Ensuring config directories exist ------------------------------ 4.83s 2025-06-19 10:40:37.995189 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.62s 2025-06-19 10:40:37.995198 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 4.30s 2025-06-19 10:40:37.995207 | orchestrator | glance : Copying over config.json files for services -------------------- 4.20s 2025-06-19 10:40:37.995217 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 4.18s 2025-06-19 10:40:37.995226 | orchestrator | glance : Check glance containers ---------------------------------------- 4.06s 2025-06-19 10:40:37.995235 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.91s 2025-06-19 10:40:37.995245 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 3.87s 2025-06-19 10:40:37.995254 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.87s 2025-06-19 10:40:37.995264 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.84s 2025-06-19 10:40:37.995273 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.81s 2025-06-19 10:40:37.995283 | orchestrator | 2025-06-19 10:40:37 | INFO  | Task 72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:40:37.995298 
| orchestrator | 2025-06-19 10:40:37 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:40:37.995308 | orchestrator | 2025-06-19 10:40:37 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED 2025-06-19 10:40:37.995317 | orchestrator | 2025-06-19 10:40:37 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:40:41.055138 | orchestrator | 2025-06-19 10:40:41 | INFO  | Task 72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:40:41.057282 | orchestrator | 2025-06-19 10:40:41 | INFO  | Task 64296c34-ca05-4f93-b9c9-d88a6103b401 is in state STARTED 2025-06-19 10:40:41.059466 | orchestrator | 2025-06-19 10:40:41 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:40:41.060892 | orchestrator | 2025-06-19 10:40:41 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED 2025-06-19 10:40:41.061172 | orchestrator | 2025-06-19 10:40:41 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:40:44.123490 | orchestrator | 2025-06-19 10:40:44 | INFO  | Task 72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:40:44.127961 | orchestrator | 2025-06-19 10:40:44 | INFO  | Task 64296c34-ca05-4f93-b9c9-d88a6103b401 is in state STARTED 2025-06-19 10:40:44.133394 | orchestrator | 2025-06-19 10:40:44 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:40:44.138411 | orchestrator | 2025-06-19 10:40:44 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED 2025-06-19 10:40:44.138768 | orchestrator | 2025-06-19 10:40:44 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:40:47.189863 | orchestrator | 2025-06-19 10:40:47 | INFO  | Task 72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:40:47.191304 | orchestrator | 2025-06-19 10:40:47 | INFO  | Task 64296c34-ca05-4f93-b9c9-d88a6103b401 is in state STARTED 2025-06-19 10:40:47.191976 | orchestrator | 2025-06-19 
10:40:47 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:40:47.193701 | orchestrator | 2025-06-19 10:40:47 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED 2025-06-19 10:40:47.193721 | orchestrator | 2025-06-19 10:40:47 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:40:50.236082 | orchestrator | 2025-06-19 10:40:50 | INFO  | Task 72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:40:50.236615 | orchestrator | 2025-06-19 10:40:50 | INFO  | Task 64296c34-ca05-4f93-b9c9-d88a6103b401 is in state STARTED 2025-06-19 10:40:50.237435 | orchestrator | 2025-06-19 10:40:50 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:40:50.240171 | orchestrator | 2025-06-19 10:40:50 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED 2025-06-19 10:40:50.240258 | orchestrator | 2025-06-19 10:40:50 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:40:53.278749 | orchestrator | 2025-06-19 10:40:53 | INFO  | Task 72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:40:53.278857 | orchestrator | 2025-06-19 10:40:53 | INFO  | Task 64296c34-ca05-4f93-b9c9-d88a6103b401 is in state STARTED 2025-06-19 10:40:53.280224 | orchestrator | 2025-06-19 10:40:53 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:40:53.281174 | orchestrator | 2025-06-19 10:40:53 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED 2025-06-19 10:40:53.281198 | orchestrator | 2025-06-19 10:40:53 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:40:56.314363 | orchestrator | 2025-06-19 10:40:56 | INFO  | Task 72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:40:56.315263 | orchestrator | 2025-06-19 10:40:56 | INFO  | Task 64296c34-ca05-4f93-b9c9-d88a6103b401 is in state STARTED 2025-06-19 10:40:56.316411 | orchestrator | 2025-06-19 10:40:56 | INFO  | Task 
59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:40:56.317147 | orchestrator | 2025-06-19 10:40:56 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED 2025-06-19 10:40:56.317175 | orchestrator | 2025-06-19 10:40:56 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:40:59.376121 | orchestrator | 2025-06-19 10:40:59 | INFO  | Task 72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:40:59.376238 | orchestrator | 2025-06-19 10:40:59 | INFO  | Task 64296c34-ca05-4f93-b9c9-d88a6103b401 is in state STARTED 2025-06-19 10:40:59.377178 | orchestrator | 2025-06-19 10:40:59 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:40:59.378005 | orchestrator | 2025-06-19 10:40:59 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED 2025-06-19 10:40:59.378076 | orchestrator | 2025-06-19 10:40:59 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:41:02.412368 | orchestrator | 2025-06-19 10:41:02 | INFO  | Task 72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:41:02.412782 | orchestrator | 2025-06-19 10:41:02 | INFO  | Task 64296c34-ca05-4f93-b9c9-d88a6103b401 is in state STARTED 2025-06-19 10:41:02.413533 | orchestrator | 2025-06-19 10:41:02 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:41:02.414235 | orchestrator | 2025-06-19 10:41:02 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED 2025-06-19 10:41:02.414267 | orchestrator | 2025-06-19 10:41:02 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:41:05.456031 | orchestrator | 2025-06-19 10:41:05 | INFO  | Task 72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:41:05.457639 | orchestrator | 2025-06-19 10:41:05 | INFO  | Task 64296c34-ca05-4f93-b9c9-d88a6103b401 is in state STARTED 2025-06-19 10:41:05.458322 | orchestrator | 2025-06-19 10:41:05 | INFO  | Task 
59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:41:05.459076 | orchestrator | 2025-06-19 10:41:05 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED 2025-06-19 10:41:05.459216 | orchestrator | 2025-06-19 10:41:05 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:41:08.481893 | orchestrator | 2025-06-19 10:41:08 | INFO  | Task 72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:41:08.482007 | orchestrator | 2025-06-19 10:41:08 | INFO  | Task 64296c34-ca05-4f93-b9c9-d88a6103b401 is in state STARTED 2025-06-19 10:41:08.482376 | orchestrator | 2025-06-19 10:41:08 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:41:08.482975 | orchestrator | 2025-06-19 10:41:08 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED 2025-06-19 10:41:08.482998 | orchestrator | 2025-06-19 10:41:08 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:41:11.510873 | orchestrator | 2025-06-19 10:41:11 | INFO  | Task 72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:41:11.512715 | orchestrator | 2025-06-19 10:41:11 | INFO  | Task 64296c34-ca05-4f93-b9c9-d88a6103b401 is in state STARTED 2025-06-19 10:41:11.516955 | orchestrator | 2025-06-19 10:41:11 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:41:11.519681 | orchestrator | 2025-06-19 10:41:11 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED 2025-06-19 10:41:11.519705 | orchestrator | 2025-06-19 10:41:11 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:41:14.564276 | orchestrator | 2025-06-19 10:41:14 | INFO  | Task 72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:41:14.567294 | orchestrator | 2025-06-19 10:41:14 | INFO  | Task 64296c34-ca05-4f93-b9c9-d88a6103b401 is in state STARTED 2025-06-19 10:41:14.569943 | orchestrator | 2025-06-19 10:41:14 | INFO  | Task 
59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:41:14.572484 | orchestrator | 2025-06-19 10:41:14 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED 2025-06-19 10:41:14.572512 | orchestrator | 2025-06-19 10:41:14 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:41:17.618229 | orchestrator | 2025-06-19 10:41:17 | INFO  | Task 72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:41:17.619181 | orchestrator | 2025-06-19 10:41:17 | INFO  | Task 64296c34-ca05-4f93-b9c9-d88a6103b401 is in state STARTED 2025-06-19 10:41:17.619632 | orchestrator | 2025-06-19 10:41:17 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:41:17.620378 | orchestrator | 2025-06-19 10:41:17 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED 2025-06-19 10:41:17.620400 | orchestrator | 2025-06-19 10:41:17 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:41:20.659985 | orchestrator | 2025-06-19 10:41:20 | INFO  | Task 72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:41:20.661042 | orchestrator | 2025-06-19 10:41:20 | INFO  | Task 64296c34-ca05-4f93-b9c9-d88a6103b401 is in state STARTED 2025-06-19 10:41:20.661439 | orchestrator | 2025-06-19 10:41:20 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:41:20.663708 | orchestrator | 2025-06-19 10:41:20 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED 2025-06-19 10:41:20.663748 | orchestrator | 2025-06-19 10:41:20 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:41:23.696653 | orchestrator | 2025-06-19 10:41:23 | INFO  | Task 72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:41:23.699915 | orchestrator | 2025-06-19 10:41:23 | INFO  | Task 64296c34-ca05-4f93-b9c9-d88a6103b401 is in state STARTED 2025-06-19 10:41:23.703860 | orchestrator | 2025-06-19 10:41:23 | INFO  | Task 
59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:41:23.704771 | orchestrator | 2025-06-19 10:41:23 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED 2025-06-19 10:41:23.705011 | orchestrator | 2025-06-19 10:41:23 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:41:26.738470 | orchestrator | 2025-06-19 10:41:26 | INFO  | Task 72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:41:26.740243 | orchestrator | 2025-06-19 10:41:26 | INFO  | Task 64296c34-ca05-4f93-b9c9-d88a6103b401 is in state STARTED 2025-06-19 10:41:26.740277 | orchestrator | 2025-06-19 10:41:26 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:41:26.740289 | orchestrator | 2025-06-19 10:41:26 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED 2025-06-19 10:41:26.740301 | orchestrator | 2025-06-19 10:41:26 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:41:29.800292 | orchestrator | 2025-06-19 10:41:29 | INFO  | Task 72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:41:29.802095 | orchestrator | 2025-06-19 10:41:29 | INFO  | Task 64296c34-ca05-4f93-b9c9-d88a6103b401 is in state STARTED 2025-06-19 10:41:29.804360 | orchestrator | 2025-06-19 10:41:29 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:41:29.806081 | orchestrator | 2025-06-19 10:41:29 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED 2025-06-19 10:41:29.806132 | orchestrator | 2025-06-19 10:41:29 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:41:32.830489 | orchestrator | 2025-06-19 10:41:32 | INFO  | Task 72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:41:32.830652 | orchestrator | 2025-06-19 10:41:32 | INFO  | Task 64296c34-ca05-4f93-b9c9-d88a6103b401 is in state STARTED 2025-06-19 10:41:32.831172 | orchestrator | 2025-06-19 10:41:32 | INFO  | Task 
59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:41:32.832419 | orchestrator | 2025-06-19 10:41:32 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state STARTED 2025-06-19 10:41:32.832443 | orchestrator | 2025-06-19 10:41:32 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:41:57.087628 | orchestrator | 2025-06-19 10:41:57 | INFO  | Task 8d4e9e99-0749-4f59-a3d7-ae4a3b67fa7c is in state STARTED 2025-06-19 10:41:57.087741 | orchestrator | 2025-06-19 10:41:57 | INFO  | Task 72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:41:57.088309 | orchestrator | 2025-06-19 10:41:57 | INFO  | Task 64296c34-ca05-4f93-b9c9-d88a6103b401 is in state STARTED 2025-06-19 10:41:57.088808 | orchestrator | 2025-06-19 10:41:57 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:41:57.090391 | orchestrator | 2025-06-19 10:41:57 | INFO  | Task 2bde2153-b5a6-4bf5-a95e-5cd3299bc16f is in state SUCCESS 2025-06-19 10:41:57.092336 | orchestrator | 2025-06-19 10:41:57.092928 | orchestrator | 2025-06-19 10:41:57.092949 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-19 10:41:57.092962 | orchestrator | 2025-06-19 10:41:57.092974 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2025-06-19 10:41:57.092985 | orchestrator | Thursday 19 June 2025 10:38:03 +0000 (0:00:00.218) 0:00:00.218 ********* 2025-06-19 10:41:57.092996 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:41:57.093008 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:41:57.093019 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:41:57.093030 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:41:57.093041 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:41:57.093051 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:41:57.093062 | orchestrator | 2025-06-19 10:41:57.093073 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-19 10:41:57.093083 | orchestrator | Thursday 19 June 2025 10:38:04 +0000 (0:00:00.567) 0:00:00.786 ********* 2025-06-19 10:41:57.093094 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-06-19 10:41:57.093105 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-06-19 10:41:57.093116 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-06-19 10:41:57.093126 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-06-19 10:41:57.093137 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-06-19 10:41:57.093148 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-06-19 10:41:57.093158 | orchestrator | 2025-06-19 10:41:57.093169 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-06-19 10:41:57.093179 | orchestrator | 2025-06-19 10:41:57.093205 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-19 10:41:57.093216 | orchestrator | Thursday 19 June 2025 10:38:05 +0000 (0:00:00.532) 0:00:01.319 ********* 2025-06-19 10:41:57.093228 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5 2025-06-19 10:41:57.093239 | orchestrator | 2025-06-19 10:41:57.093250 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-06-19 10:41:57.093260 | orchestrator | Thursday 19 June 2025 10:38:06 +0000 (0:00:00.954) 0:00:02.273 ********* 2025-06-19 10:41:57.093272 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-06-19 10:41:57.093282 | orchestrator | 2025-06-19 10:41:57.093293 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-06-19 10:41:57.093304 | orchestrator | Thursday 19 June 2025 10:38:09 +0000 (0:00:03.079) 0:00:05.353 ********* 2025-06-19 10:41:57.093315 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-06-19 10:41:57.093326 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-06-19 10:41:57.093337 | orchestrator | 2025-06-19 10:41:57.093347 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-06-19 10:41:57.093358 | orchestrator | Thursday 19 June 2025 10:38:14 +0000 (0:00:05.691) 0:00:11.044 ********* 2025-06-19 10:41:57.093369 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-19 10:41:57.093380 | orchestrator | 2025-06-19 10:41:57.093391 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-06-19 10:41:57.093402 | orchestrator | Thursday 19 June 2025 10:38:17 +0000 (0:00:03.179) 0:00:14.224 ********* 2025-06-19 10:41:57.093413 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-19 10:41:57.093423 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-06-19 10:41:57.093452 | orchestrator | 2025-06-19 10:41:57.093464 | orchestrator | TASK 
[service-ks-register : cinder | Creating roles] *************************** 2025-06-19 10:41:57.093498 | orchestrator | Thursday 19 June 2025 10:38:21 +0000 (0:00:03.857) 0:00:18.083 ********* 2025-06-19 10:41:57.093511 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-19 10:41:57.093523 | orchestrator | 2025-06-19 10:41:57.093536 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-06-19 10:41:57.093548 | orchestrator | Thursday 19 June 2025 10:38:25 +0000 (0:00:03.502) 0:00:21.585 ********* 2025-06-19 10:41:57.093561 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-06-19 10:41:57.093573 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-06-19 10:41:57.093585 | orchestrator | 2025-06-19 10:41:57.093597 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-06-19 10:41:57.093610 | orchestrator | Thursday 19 June 2025 10:38:33 +0000 (0:00:08.098) 0:00:29.684 ********* 2025-06-19 10:41:57.093626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-19 
10:41:57.093698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-19 10:41:57.093721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-19 10:41:57.093736 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-19 10:41:57.093758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-19 10:41:57.093772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-19 10:41:57.093819 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-19 10:41:57.093839 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-19 10:41:57.093851 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-19 10:41:57.093863 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-19 10:41:57.093881 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-19 10:41:57.093893 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-19 10:41:57.093905 | orchestrator | 2025-06-19 10:41:57.093944 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-19 10:41:57.093957 | orchestrator | Thursday 19 June 2025 10:38:36 +0000 (0:00:02.765) 0:00:32.449 ********* 2025-06-19 10:41:57.093968 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:41:57.093979 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:41:57.093990 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:41:57.094000 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:41:57.094011 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:41:57.094073 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:41:57.094084 | orchestrator | 2025-06-19 10:41:57.094095 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-19 10:41:57.094106 | orchestrator | Thursday 19 June 2025 10:38:36 +0000 (0:00:00.755) 0:00:33.205 ********* 2025-06-19 10:41:57.094116 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:41:57.094127 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:41:57.094138 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:41:57.094149 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-19 10:41:57.094160 | orchestrator | 2025-06-19 10:41:57.094170 | 
orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-06-19 10:41:57.094181 | orchestrator | Thursday 19 June 2025 10:38:37 +0000 (0:00:00.986) 0:00:34.191 ********* 2025-06-19 10:41:57.094192 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-06-19 10:41:57.094203 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-06-19 10:41:57.094219 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-06-19 10:41:57.094230 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-06-19 10:41:57.094248 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-06-19 10:41:57.094259 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-06-19 10:41:57.094270 | orchestrator | 2025-06-19 10:41:57.094281 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-06-19 10:41:57.094291 | orchestrator | Thursday 19 June 2025 10:38:40 +0000 (0:00:02.450) 0:00:36.642 ********* 2025-06-19 10:41:57.094304 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 
'cluster': 'ceph', 'enabled': True}])  2025-06-19 10:41:57.094317 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-19 10:41:57.094329 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-19 10:41:57.094379 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-19 10:41:57.094398 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-19 10:41:57.094416 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': 
'30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-19 10:41:57.094428 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-19 10:41:57.094440 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-19 10:41:57.094502 | orchestrator | changed: 
[testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-19 10:41:57.094528 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-19 10:41:57.094541 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-19 10:41:57.094552 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-19 10:41:57.094563 | orchestrator | 2025-06-19 10:41:57.094574 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-06-19 10:41:57.094585 | orchestrator | Thursday 19 June 2025 10:38:44 +0000 (0:00:03.992) 0:00:40.635 ********* 2025-06-19 10:41:57.094596 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-06-19 10:41:57.094607 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-06-19 10:41:57.094618 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 
2025-06-19 10:41:57.094629 | orchestrator |
2025-06-19 10:41:57.094639 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] *****************
2025-06-19 10:41:57.094650 | orchestrator | Thursday 19 June 2025 10:38:46 +0000 (0:00:02.102) 0:00:42.737 *********
2025-06-19 10:41:57.094667 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring)
2025-06-19 10:41:57.094686 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring)
2025-06-19 10:41:57.094707 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring)
2025-06-19 10:41:57.094739 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring)
2025-06-19 10:41:57.094760 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring)
2025-06-19 10:41:57.094830 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring)
2025-06-19 10:41:57.094852 | orchestrator |
2025-06-19 10:41:57.094871 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2025-06-19 10:41:57.094883 | orchestrator | Thursday 19 June 2025 10:38:49 +0000 (0:00:02.962) 0:00:45.700 *********
2025-06-19 10:41:57.094904 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume)
2025-06-19 10:41:57.094915 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume)
2025-06-19 10:41:57.094926 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume)
2025-06-19 10:41:57.094936 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup)
2025-06-19 10:41:57.094947 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup)
2025-06-19 10:41:57.094957 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup)
2025-06-19 10:41:57.094968 | orchestrator |
2025-06-19 10:41:57.094978 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2025-06-19 10:41:57.094989 | orchestrator | Thursday 19 June 2025 10:38:51 +0000 (0:00:01.633) 0:00:47.334 *********
2025-06-19 10:41:57.095000 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:41:57.095010 | orchestrator |
2025-06-19 10:41:57.095021 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2025-06-19 10:41:57.095031 | orchestrator | Thursday 19 June 2025 10:38:51 +0000 (0:00:00.407) 0:00:47.742 *********
2025-06-19 10:41:57.095042 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:41:57.095053 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:41:57.095063 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:41:57.095074 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:41:57.095084 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:41:57.095107 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:41:57.095118 | orchestrator |
2025-06-19 10:41:57.095129 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-06-19 10:41:57.095139 | orchestrator | Thursday 19 June 2025 10:38:53 +0000 (0:00:02.256) 0:00:49.998 *********
2025-06-19 10:41:57.095151 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-19 10:41:57.095162 | orchestrator |
2025-06-19 10:41:57.095173 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] *********
2025-06-19 10:41:57.095184 | orchestrator | Thursday 19 June 2025 10:38:56 +0000 (0:00:02.835) 0:00:52.834 *********
2025-06-19 10:41:57.095195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-19 10:41:57.095207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-19 10:41:57.095255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-19 10:41:57.095278 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-19 10:41:57.095295 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-19 10:41:57.095307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-19 10:41:57.095318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-19 10:41:57.095329 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-19 10:41:57.095379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-19 10:41:57.095393 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-19 10:41:57.095409 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-19 10:41:57.095420 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-19 10:41:57.095431 | orchestrator |
2025-06-19 10:41:57.095442 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] ***
2025-06-19 10:41:57.095453 | orchestrator | Thursday 19 June 2025 10:39:00 +0000 (0:00:04.068) 0:00:56.902 *********
2025-06-19 10:41:57.095498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-19 10:41:57.095549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-19 10:41:57.095563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-19 10:41:57.095580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-19 10:41:57.095591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-19 10:41:57.095603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-19 10:41:57.095620 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:41:57.095632 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:41:57.095642 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:41:57.095654 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-19 10:41:57.095671 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-19 10:41:57.095683 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:41:57.095699 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-19 10:41:57.095711 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-19 10:41:57.095722 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:41:57.095733 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-19 10:41:57.095750 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-19 10:41:57.095761 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:41:57.095772 | orchestrator |
2025-06-19 10:41:57.095783 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ******
2025-06-19 10:41:57.095794 | orchestrator | Thursday 19 June 2025 10:39:02 +0000 (0:00:01.778) 0:00:58.680 *********
2025-06-19 10:41:57.095813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-19 10:41:57.095830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-19 10:41:57.095841 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:41:57.095853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-19 10:41:57.095864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-19 10:41:57.095881 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:41:57.095892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-19 10:41:57.095911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-19 10:41:57.095923 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:41:57.095939 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-19 10:41:57.095950 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-19 10:41:57.095961 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:41:57.095972 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-19 10:41:57.095990 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-19 10:41:57.096001 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:41:57.096018 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-19 10:41:57.096030 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-19 10:41:57.096041 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:41:57.096051 | orchestrator |
2025-06-19 10:41:57.096062 | orchestrator | TASK [cinder : Copying over config.json files for services] ********************
2025-06-19 10:41:57.096083 | orchestrator | Thursday 19 June 2025 10:39:04 +0000 (0:00:01.724) 0:01:00.404 *********
2025-06-19 10:41:57.096102 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-19 10:41:57.096142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-19 10:41:57.096167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-19 10:41:57.096201 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-19 10:41:57.096235 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-19 10:41:57.096256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-19 10:41:57.096286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-19 10:41:57.096307 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-19 10:41:57.096326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-19 10:41:57.096357 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-19 10:41:57.096385 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-19 10:41:57.096404 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-19 10:41:57.096435 | orchestrator | 2025-06-19 10:41:57.096453 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-06-19 10:41:57.096502 | orchestrator | Thursday 19 June 2025 10:39:07 +0000 (0:00:03.627) 0:01:04.032 ********* 2025-06-19 10:41:57.096523 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-19 10:41:57.096541 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:41:57.096560 | orchestrator | skipping: [testbed-node-5] => 
(item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-19 10:41:57.096579 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:41:57.096597 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-19 10:41:57.096615 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:41:57.096633 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-19 10:41:57.096651 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-19 10:41:57.096670 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-19 10:41:57.096689 | orchestrator | 2025-06-19 10:41:57.096707 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-06-19 10:41:57.096725 | orchestrator | Thursday 19 June 2025 10:39:09 +0000 (0:00:02.096) 0:01:06.129 ********* 2025-06-19 10:41:57.096743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-19 10:41:57.096773 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-19 10:41:57.096801 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-19 10:41:57.096832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-19 10:41:57.096852 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-19 10:41:57.096880 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-19 10:41:57.096900 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-19 10:41:57.096926 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-19 10:41:57.096954 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-19 10:41:57.096975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-19 10:41:57.097004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-19 10:41:57.097025 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-19 10:41:57.097044 | orchestrator | 2025-06-19 10:41:57.097062 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-06-19 10:41:57.097090 | orchestrator | Thursday 19 June 2025 10:39:20 +0000 (0:00:10.400) 0:01:16.529 ********* 2025-06-19 10:41:57.097121 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:41:57.097140 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:41:57.097158 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:41:57.097176 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:41:57.097193 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:41:57.097211 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:41:57.097229 | orchestrator | 2025-06-19 10:41:57.097248 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-06-19 10:41:57.097267 | orchestrator | Thursday 19 June 2025 10:39:24 +0000 (0:00:03.747) 0:01:20.276 ********* 2025-06-19 10:41:57.097317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-19 10:41:57.097348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-19 10:41:57.097380 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:41:57.097405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-19 10:41:57.097422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-19 10:41:57.097441 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:41:57.097544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-19 10:41:57.097585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-19 10:41:57.097604 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:41:57.097646 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-19 10:41:57.097675 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-backup 5672'], 'timeout': '30'}}})  2025-06-19 10:41:57.097694 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:41:57.097712 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-19 10:41:57.097744 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-19 10:41:57.097773 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:41:57.097818 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 
'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-19 10:41:57.097851 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-19 10:41:57.097883 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:41:57.097906 | orchestrator | 2025-06-19 10:41:57.097924 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-06-19 10:41:57.097939 | orchestrator | Thursday 19 June 2025 10:39:25 +0000 (0:00:01.667) 0:01:21.944 ********* 2025-06-19 10:41:57.097959 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:41:57.097985 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:41:57.098007 | orchestrator | skipping: [testbed-node-2] 
2025-06-19 10:41:57.098060 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:41:57.098088 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:41:57.098115 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:41:57.098134 | orchestrator | 2025-06-19 10:41:57.098152 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-06-19 10:41:57.098169 | orchestrator | Thursday 19 June 2025 10:39:26 +0000 (0:00:00.780) 0:01:22.725 ********* 2025-06-19 10:41:57.098188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-19 10:41:57.098221 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 
'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-19 10:41:57.098276 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-19 10:41:57.098304 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-19 10:41:57.098322 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-19 10:41:57.098340 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-19 10:41:57.098359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 
'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-19 10:41:57.098404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-19 10:41:57.098435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-19 10:41:57.098462 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-19 10:41:57.098510 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-19 10:41:57.098526 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 
'timeout': '30'}}}) 2025-06-19 10:41:57.098552 | orchestrator | 2025-06-19 10:41:57.098579 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-19 10:41:57.098598 | orchestrator | Thursday 19 June 2025 10:39:29 +0000 (0:00:02.900) 0:01:25.626 ********* 2025-06-19 10:41:57.098613 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:41:57.098640 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:41:57.098664 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:41:57.098683 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:41:57.098707 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:41:57.098726 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:41:57.098742 | orchestrator | 2025-06-19 10:41:57.098757 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-06-19 10:41:57.098773 | orchestrator | Thursday 19 June 2025 10:39:30 +0000 (0:00:00.625) 0:01:26.251 ********* 2025-06-19 10:41:57.098794 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:41:57.098822 | orchestrator | 2025-06-19 10:41:57.098841 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-06-19 10:41:57.098856 | orchestrator | Thursday 19 June 2025 10:39:32 +0000 (0:00:02.064) 0:01:28.316 ********* 2025-06-19 10:41:57.098872 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:41:57.098887 | orchestrator | 2025-06-19 10:41:57.098903 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-06-19 10:41:57.098928 | orchestrator | Thursday 19 June 2025 10:39:34 +0000 (0:00:02.027) 0:01:30.344 ********* 2025-06-19 10:41:57.098953 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:41:57.098973 | orchestrator | 2025-06-19 10:41:57.098989 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-19 10:41:57.099004 | 
orchestrator | Thursday 19 June 2025 10:39:53 +0000 (0:00:18.973) 0:01:49.317 ********* 2025-06-19 10:41:57.099019 | orchestrator | 2025-06-19 10:41:57.099059 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-19 10:41:57.099082 | orchestrator | Thursday 19 June 2025 10:39:53 +0000 (0:00:00.059) 0:01:49.376 ********* 2025-06-19 10:41:57.099098 | orchestrator | 2025-06-19 10:41:57.099114 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-19 10:41:57.099129 | orchestrator | Thursday 19 June 2025 10:39:53 +0000 (0:00:00.062) 0:01:49.438 ********* 2025-06-19 10:41:57.099154 | orchestrator | 2025-06-19 10:41:57.099180 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-19 10:41:57.099198 | orchestrator | Thursday 19 June 2025 10:39:53 +0000 (0:00:00.060) 0:01:49.499 ********* 2025-06-19 10:41:57.099213 | orchestrator | 2025-06-19 10:41:57.099228 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-19 10:41:57.099243 | orchestrator | Thursday 19 June 2025 10:39:53 +0000 (0:00:00.061) 0:01:49.561 ********* 2025-06-19 10:41:57.099266 | orchestrator | 2025-06-19 10:41:57.099293 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-19 10:41:57.099310 | orchestrator | Thursday 19 June 2025 10:39:53 +0000 (0:00:00.061) 0:01:49.623 ********* 2025-06-19 10:41:57.099327 | orchestrator | 2025-06-19 10:41:57.099342 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-06-19 10:41:57.099358 | orchestrator | Thursday 19 June 2025 10:39:53 +0000 (0:00:00.060) 0:01:49.683 ********* 2025-06-19 10:41:57.099374 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:41:57.099388 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:41:57.099404 | orchestrator | changed: 
[testbed-node-1] 2025-06-19 10:41:57.099418 | orchestrator | 2025-06-19 10:41:57.099443 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-06-19 10:41:57.099460 | orchestrator | Thursday 19 June 2025 10:40:20 +0000 (0:00:26.959) 0:02:16.643 ********* 2025-06-19 10:41:57.099547 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:41:57.099564 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:41:57.099579 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:41:57.099594 | orchestrator | 2025-06-19 10:41:57.099610 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-06-19 10:41:57.099626 | orchestrator | Thursday 19 June 2025 10:40:30 +0000 (0:00:10.511) 0:02:27.154 ********* 2025-06-19 10:41:57.099641 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:41:57.099656 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:41:57.099685 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:41:57.099701 | orchestrator | 2025-06-19 10:41:57.099717 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-06-19 10:41:57.099732 | orchestrator | Thursday 19 June 2025 10:41:44 +0000 (0:01:13.374) 0:03:40.528 ********* 2025-06-19 10:41:57.099749 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:41:57.099763 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:41:57.099814 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:41:57.099832 | orchestrator | 2025-06-19 10:41:57.099849 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-06-19 10:41:57.099864 | orchestrator | Thursday 19 June 2025 10:41:52 +0000 (0:00:08.200) 0:03:48.729 ********* 2025-06-19 10:41:57.099879 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:41:57.099895 | orchestrator | 2025-06-19 10:41:57.099910 | orchestrator | PLAY RECAP 
********************************************************************* 2025-06-19 10:41:57.099926 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-19 10:41:57.099941 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-19 10:41:57.099955 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-19 10:41:57.099967 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-19 10:41:57.099980 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-19 10:41:57.099992 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-19 10:41:57.100003 | orchestrator | 2025-06-19 10:41:57.100016 | orchestrator | 2025-06-19 10:41:57.100028 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-19 10:41:57.100041 | orchestrator | Thursday 19 June 2025 10:41:53 +0000 (0:00:01.395) 0:03:50.125 ********* 2025-06-19 10:41:57.100054 | orchestrator | =============================================================================== 2025-06-19 10:41:57.100066 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 73.37s 2025-06-19 10:41:57.100079 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 26.96s 2025-06-19 10:41:57.100091 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 18.97s 2025-06-19 10:41:57.100104 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 10.51s 2025-06-19 10:41:57.100117 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 10.40s 2025-06-19 10:41:57.100130 | orchestrator | cinder : 
Restart cinder-backup container -------------------------------- 8.20s 2025-06-19 10:41:57.100142 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.10s 2025-06-19 10:41:57.100155 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 5.69s 2025-06-19 10:41:57.100182 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.07s 2025-06-19 10:41:57.100197 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.99s 2025-06-19 10:41:57.100210 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.86s 2025-06-19 10:41:57.100222 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 3.75s 2025-06-19 10:41:57.100236 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.63s 2025-06-19 10:41:57.100248 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.50s 2025-06-19 10:41:57.100262 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.18s 2025-06-19 10:41:57.100279 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.08s 2025-06-19 10:41:57.100287 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.96s 2025-06-19 10:41:57.100295 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.90s 2025-06-19 10:41:57.100303 | orchestrator | cinder : include_tasks -------------------------------------------------- 2.84s 2025-06-19 10:41:57.100311 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.77s 2025-06-19 10:41:57.100319 | orchestrator | 2025-06-19 10:41:57 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:42:00.119437 | orchestrator | 2025-06-19 10:42:00 | INFO  | Task 
8d4e9e99-0749-4f59-a3d7-ae4a3b67fa7c is in state STARTED 2025-06-19 10:42:00.120460 | orchestrator | 2025-06-19 10:42:00 | INFO  | Task 72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:42:00.120909 | orchestrator | 2025-06-19 10:42:00 | INFO  | Task 64296c34-ca05-4f93-b9c9-d88a6103b401 is in state STARTED 2025-06-19 10:42:00.121414 | orchestrator | 2025-06-19 10:42:00 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:42:00.121445 | orchestrator | 2025-06-19 10:42:00 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:42:39.677627 | orchestrator | 2025-06-19 10:42:39 | INFO  | Task c4a8d106-8db9-45ab-b6d7-889fae1c35e6 is in state STARTED 2025-06-19 10:42:39.678550 | orchestrator | 2025-06-19 10:42:39 | INFO  | Task 8d4e9e99-0749-4f59-a3d7-ae4a3b67fa7c is in state STARTED 2025-06-19 10:42:39.679779 | orchestrator | 2025-06-19 10:42:39 | INFO  | Task 72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:42:39.681495 | orchestrator | 2025-06-19 10:42:39 | INFO  | Task 64296c34-ca05-4f93-b9c9-d88a6103b401 is in state SUCCESS 2025-06-19 10:42:39.684617 | orchestrator | 2025-06-19 10:42:39.684706 | orchestrator | 2025-06-19 10:42:39.684723 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-19 10:42:39.684737 | orchestrator | 2025-06-19 10:42:39.684748 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-19 10:42:39.685070 | orchestrator | Thursday 19 June 2025 10:40:42 +0000 (0:00:00.274) 0:00:00.274 ********* 2025-06-19 10:42:39.685084 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:42:39.685097 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:42:39.685124 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:42:39.685135 | orchestrator | 2025-06-19 10:42:39.685146 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-19
10:42:39.685157 | orchestrator | Thursday 19 June 2025 10:40:42 +0000 (0:00:00.406) 0:00:00.681 ********* 2025-06-19 10:42:39.685168 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-06-19 10:42:39.685179 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-06-19 10:42:39.685190 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-06-19 10:42:39.685201 | orchestrator | 2025-06-19 10:42:39.685212 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-06-19 10:42:39.685223 | orchestrator | 2025-06-19 10:42:39.685234 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-06-19 10:42:39.685245 | orchestrator | Thursday 19 June 2025 10:40:43 +0000 (0:00:00.488) 0:00:01.169 ********* 2025-06-19 10:42:39.685256 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:42:39.685289 | orchestrator | 2025-06-19 10:42:39.685301 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-06-19 10:42:39.685312 | orchestrator | Thursday 19 June 2025 10:40:43 +0000 (0:00:00.553) 0:00:01.722 ********* 2025-06-19 10:42:39.685323 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-06-19 10:42:39.685334 | orchestrator | 2025-06-19 10:42:39.685345 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-06-19 10:42:39.685375 | orchestrator | Thursday 19 June 2025 10:40:46 +0000 (0:00:03.154) 0:00:04.877 ********* 2025-06-19 10:42:39.685386 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-06-19 10:42:39.685397 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-06-19 10:42:39.685408 | orchestrator | 
2025-06-19 10:42:39.685455 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-06-19 10:42:39.685466 | orchestrator | Thursday 19 June 2025 10:40:53 +0000 (0:00:06.576) 0:00:11.453 ********* 2025-06-19 10:42:39.685477 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-19 10:42:39.685488 | orchestrator | 2025-06-19 10:42:39.685498 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-06-19 10:42:39.685509 | orchestrator | Thursday 19 June 2025 10:40:56 +0000 (0:00:03.114) 0:00:14.568 ********* 2025-06-19 10:42:39.685520 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-19 10:42:39.685530 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-06-19 10:42:39.685541 | orchestrator | 2025-06-19 10:42:39.685552 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-06-19 10:42:39.685563 | orchestrator | Thursday 19 June 2025 10:41:00 +0000 (0:00:03.865) 0:00:18.434 ********* 2025-06-19 10:42:39.685574 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-19 10:42:39.685585 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-06-19 10:42:39.685596 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-06-19 10:42:39.685606 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-06-19 10:42:39.685617 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-06-19 10:42:39.685628 | orchestrator | 2025-06-19 10:42:39.685638 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-06-19 10:42:39.685649 | orchestrator | Thursday 19 June 2025 10:41:16 +0000 (0:00:15.622) 0:00:34.056 ********* 2025-06-19 10:42:39.685661 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-06-19 10:42:39.685673 | orchestrator | 2025-06-19 
10:42:39.685685 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-06-19 10:42:39.685696 | orchestrator | Thursday 19 June 2025 10:41:20 +0000 (0:00:04.094) 0:00:38.150 ********* 2025-06-19 10:42:39.685712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-19 10:42:39.685752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-19 10:42:39.685778 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-19 10:42:39.685791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-19 10:42:39.685805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-19 10:42:39.685818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-19 10:42:39.685841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-19 10:42:39.685868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-19 10:42:39.685881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-19 10:42:39.685894 | orchestrator | 2025-06-19 10:42:39.685906 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-06-19 10:42:39.685918 | orchestrator | Thursday 19 June 2025 10:41:22 +0000 (0:00:02.704) 0:00:40.854 ********* 2025-06-19 10:42:39.685930 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-06-19 10:42:39.685942 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-06-19 10:42:39.685953 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-06-19 10:42:39.685965 | orchestrator | 2025-06-19 10:42:39.685977 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-06-19 10:42:39.685989 | orchestrator | Thursday 19 June 2025 10:41:23 +0000 (0:00:00.884) 0:00:41.739 ********* 2025-06-19 10:42:39.686002 | orchestrator | skipping: [testbed-node-0] 2025-06-19 
10:42:39.686013 | orchestrator | 2025-06-19 10:42:39.686077 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-06-19 10:42:39.686088 | orchestrator | Thursday 19 June 2025 10:41:23 +0000 (0:00:00.191) 0:00:41.930 ********* 2025-06-19 10:42:39.686098 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:42:39.686109 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:42:39.686120 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:42:39.686131 | orchestrator | 2025-06-19 10:42:39.686141 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-06-19 10:42:39.686152 | orchestrator | Thursday 19 June 2025 10:41:24 +0000 (0:00:00.965) 0:00:42.896 ********* 2025-06-19 10:42:39.686163 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:42:39.686174 | orchestrator | 2025-06-19 10:42:39.686185 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-06-19 10:42:39.686196 | orchestrator | Thursday 19 June 2025 10:41:25 +0000 (0:00:00.729) 0:00:43.626 ********* 2025-06-19 10:42:39.686207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 
'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-19 10:42:39.686237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-19 10:42:39.686283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-19 10:42:39.686297 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-19 10:42:39.686308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-19 10:42:39.686320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-19 10:42:39.686338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-19 10:42:39.686357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-19 10:42:39.686373 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 
'timeout': '30'}}}) 2025-06-19 10:42:39.686385 | orchestrator | 2025-06-19 10:42:39.686396 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-06-19 10:42:39.686407 | orchestrator | Thursday 19 June 2025 10:41:29 +0000 (0:00:03.336) 0:00:46.962 ********* 2025-06-19 10:42:39.686440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-19 10:42:39.686452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-19 10:42:39.686464 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-19 10:42:39.686482 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:42:39.686501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-19 10:42:39.686526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-19 10:42:39.686538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-19 10:42:39.686549 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:42:39.686560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}}}})  2025-06-19 10:42:39.686572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-19 10:42:39.686591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-19 10:42:39.686602 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:42:39.686613 | orchestrator | 2025-06-19 10:42:39.686623 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-06-19 10:42:39.686634 | orchestrator | Thursday 19 June 2025 10:41:29 +0000 (0:00:00.789) 0:00:47.752 ********* 2025-06-19 10:42:39.686657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-19 10:42:39.686670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-19 10:42:39.686681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': 
'30'}}})  2025-06-19 10:42:39.686692 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:42:39.686703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-19 10:42:39.686721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-19 10:42:39.686733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-19 10:42:39.686743 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:42:39.686766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-19 10:42:39.686778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-19 10:42:39.686789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-19 10:42:39.686809 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:42:39.686820 | orchestrator | 2025-06-19 10:42:39.686831 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-06-19 10:42:39.686842 | orchestrator | Thursday 19 June 2025 10:41:30 +0000 (0:00:00.892) 0:00:48.644 ********* 2025-06-19 10:42:39.686853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}}}}) 2025-06-19 10:42:39.686870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-19 10:42:39.686886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-19 10:42:39.686897 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-19 10:42:39.686908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-19 10:42:39.686927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': 
'30'}}}) 2025-06-19 10:42:39.686938 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-19 10:42:39.686955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-19 10:42:39.686971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-19 10:42:39.686982 | orchestrator | 2025-06-19 10:42:39.686993 | orchestrator | TASK [barbican : Copying over barbican-api.ini] 
******************************** 2025-06-19 10:42:39.687004 | orchestrator | Thursday 19 June 2025 10:41:34 +0000 (0:00:03.684) 0:00:52.329 ********* 2025-06-19 10:42:39.687014 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:42:39.687025 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:42:39.687035 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:42:39.687046 | orchestrator | 2025-06-19 10:42:39.687071 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-06-19 10:42:39.687082 | orchestrator | Thursday 19 June 2025 10:41:37 +0000 (0:00:02.946) 0:00:55.275 ********* 2025-06-19 10:42:39.687093 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-19 10:42:39.687103 | orchestrator | 2025-06-19 10:42:39.687114 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-06-19 10:42:39.687125 | orchestrator | Thursday 19 June 2025 10:41:38 +0000 (0:00:01.244) 0:00:56.519 ********* 2025-06-19 10:42:39.687135 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:42:39.687146 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:42:39.687163 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:42:39.687174 | orchestrator | 2025-06-19 10:42:39.687184 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-06-19 10:42:39.687195 | orchestrator | Thursday 19 June 2025 10:41:39 +0000 (0:00:01.087) 0:00:57.607 ********* 2025-06-19 10:42:39.687206 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-19 10:42:39.687218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-19 10:42:39.687236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-19 10:42:39.687253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-19 10:42:39.687265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-19 10:42:39.687282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 
'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-19 10:42:39.687293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-19 10:42:39.687304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-19 10:42:39.687316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-19 10:42:39.687327 | orchestrator | 2025-06-19 10:42:39.687338 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-06-19 10:42:39.687348 | orchestrator | Thursday 19 June 2025 10:41:48 +0000 (0:00:08.877) 0:01:06.485 ********* 2025-06-19 10:42:39.687370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-19 10:42:39.687389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-19 10:42:39.687400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-19 10:42:39.687411 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:42:39.687478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-19 10:42:39.687490 | orchestrator | 
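The container definitions logged above all carry a kolla-style healthcheck dict whose durations are second-valued strings ('interval': '30', 'timeout': '30', 'start_period': '5'). As a rough illustration of what such a dict amounts to, here is a minimal sketch converting it into the nanosecond-based fields the Docker Engine API expects; the helper name is hypothetical and not part of kolla-ansible:

```python
# Convert a kolla-style healthcheck dict (durations as second-valued
# strings) into the nanosecond fields of the Docker Engine API's
# HealthConfig. Illustrative helper, not part of kolla-ansible.
NS_PER_SECOND = 1_000_000_000

def to_docker_healthcheck(hc: dict) -> dict:
    return {
        "Test": hc["test"],  # e.g. ['CMD-SHELL', 'healthcheck_port barbican-worker 5672']
        "Interval": int(hc["interval"]) * NS_PER_SECOND,
        "Timeout": int(hc["timeout"]) * NS_PER_SECOND,
        "StartPeriod": int(hc["start_period"]) * NS_PER_SECOND,
        "Retries": int(hc["retries"]),
    }

# Sample taken from the barbican-worker item in the log above.
hc = {"interval": "30", "retries": "3", "start_period": "5",
      "test": ["CMD-SHELL", "healthcheck_port barbican-worker 5672"],
      "timeout": "30"}
print(to_docker_healthcheck(hc)["Interval"])  # 30000000000
```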
skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-19 10:42:39.687508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-19 10:42:39.687519 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:42:39.687536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-19 10:42:39.687558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-19 10:42:39.687569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-19 10:42:39.687580 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:42:39.687591 | orchestrator | 2025-06-19 10:42:39.687601 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-06-19 10:42:39.687612 | orchestrator | Thursday 19 June 2025 10:41:49 +0000 (0:00:00.848) 0:01:07.333 ********* 2025-06-19 10:42:39.687624 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-19 10:42:39.687745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-19 10:42:39.687767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 
'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-19 10:42:39.687777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-19 10:42:39.687787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-19 10:42:39.687797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-19 10:42:39.687807 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-19 10:42:39.687829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-19 10:42:39.687846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-19 10:42:39.687856 | orchestrator | 2025-06-19 10:42:39.687866 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-06-19 10:42:39.687875 | orchestrator | Thursday 19 June 2025 10:41:52 +0000 (0:00:02.973) 0:01:10.306 ********* 2025-06-19 10:42:39.687885 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:42:39.687895 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:42:39.687904 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:42:39.687913 | orchestrator | 2025-06-19 10:42:39.687923 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-06-19 10:42:39.687933 | orchestrator | Thursday 19 June 2025 10:41:53 +0000 (0:00:00.637) 0:01:10.944 ********* 2025-06-19 10:42:39.687942 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:42:39.687951 | orchestrator | 2025-06-19 10:42:39.687961 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-06-19 10:42:39.687970 | orchestrator | Thursday 19 June 2025 10:41:55 +0000 (0:00:02.491) 0:01:13.436 ********* 2025-06-19 10:42:39.687980 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:42:39.687989 | 
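Note that every volumes list in these items ends with empty entries ('', ''): the kolla templates emit '' for optional mounts that are not enabled, and such entries must be dropped before the list reaches the container runtime. A minimal sketch of that filtering step (assumed behavior, with an illustrative helper name):

```python
def effective_volumes(volumes: list[str]) -> list[str]:
    # kolla templates render '' for disabled optional mounts; drop the
    # empty entries so only real bind mounts and named volumes remain.
    return [v for v in volumes if v]

# Volumes list as it appears for barbican_worker in the log above.
vols = ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro',
        '/etc/localtime:/etc/localtime:ro',
        '/etc/timezone:/etc/timezone:ro',
        'kolla_logs:/var/log/kolla/', '', '']
print(effective_volumes(vols))
```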
orchestrator | 2025-06-19 10:42:39.687999 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-06-19 10:42:39.688009 | orchestrator | Thursday 19 June 2025 10:41:57 +0000 (0:00:02.389) 0:01:15.826 ********* 2025-06-19 10:42:39.688018 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:42:39.688028 | orchestrator | 2025-06-19 10:42:39.688037 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-06-19 10:42:39.688046 | orchestrator | Thursday 19 June 2025 10:42:09 +0000 (0:00:11.844) 0:01:27.670 ********* 2025-06-19 10:42:39.688056 | orchestrator | 2025-06-19 10:42:39.688066 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-06-19 10:42:39.688075 | orchestrator | Thursday 19 June 2025 10:42:09 +0000 (0:00:00.123) 0:01:27.793 ********* 2025-06-19 10:42:39.688084 | orchestrator | 2025-06-19 10:42:39.688094 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-06-19 10:42:39.688103 | orchestrator | Thursday 19 June 2025 10:42:09 +0000 (0:00:00.121) 0:01:27.915 ********* 2025-06-19 10:42:39.688113 | orchestrator | 2025-06-19 10:42:39.688122 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-06-19 10:42:39.688132 | orchestrator | Thursday 19 June 2025 10:42:10 +0000 (0:00:00.142) 0:01:28.057 ********* 2025-06-19 10:42:39.688141 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:42:39.688151 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:42:39.688160 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:42:39.688170 | orchestrator | 2025-06-19 10:42:39.688179 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-06-19 10:42:39.688188 | orchestrator | Thursday 19 June 2025 10:42:22 +0000 (0:00:12.118) 0:01:40.176 ********* 2025-06-19 10:42:39.688198 | 
orchestrator | changed: [testbed-node-0] 2025-06-19 10:42:39.688207 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:42:39.688217 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:42:39.688226 | orchestrator | 2025-06-19 10:42:39.688236 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-06-19 10:42:39.688245 | orchestrator | Thursday 19 June 2025 10:42:33 +0000 (0:00:10.935) 0:01:51.111 ********* 2025-06-19 10:42:39.688260 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:42:39.688270 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:42:39.688279 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:42:39.688289 | orchestrator | 2025-06-19 10:42:39.688298 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-19 10:42:39.688309 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-19 10:42:39.688320 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-19 10:42:39.688330 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-19 10:42:39.688339 | orchestrator | 2025-06-19 10:42:39.688349 | orchestrator | 2025-06-19 10:42:39.688359 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-19 10:42:39.688370 | orchestrator | Thursday 19 June 2025 10:42:38 +0000 (0:00:05.002) 0:01:56.113 ********* 2025-06-19 10:42:39.688380 | orchestrator | =============================================================================== 2025-06-19 10:42:39.688391 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.62s 2025-06-19 10:42:39.688407 | orchestrator | barbican : Restart barbican-api container ------------------------------ 12.12s 2025-06-19 10:42:39.688437 | orchestrator | 
barbican : Running barbican bootstrap container ------------------------ 11.84s 2025-06-19 10:42:39.688447 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 10.94s 2025-06-19 10:42:39.688458 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 8.88s 2025-06-19 10:42:39.688473 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.58s 2025-06-19 10:42:39.688484 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 5.00s 2025-06-19 10:42:39.688494 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.09s 2025-06-19 10:42:39.688505 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.87s 2025-06-19 10:42:39.688515 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.68s 2025-06-19 10:42:39.688526 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.34s 2025-06-19 10:42:39.688536 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.15s 2025-06-19 10:42:39.688546 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.11s 2025-06-19 10:42:39.688556 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.97s 2025-06-19 10:42:39.688567 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.95s 2025-06-19 10:42:39.688577 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.70s 2025-06-19 10:42:39.688588 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.49s 2025-06-19 10:42:39.688598 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.39s 2025-06-19 10:42:39.688609 | orchestrator | barbican : 
Checking whether barbican-api-paste.ini file exists ---------- 1.24s 2025-06-19 10:42:39.688619 | orchestrator | barbican : Copying over barbican-api-paste.ini -------------------------- 1.09s 2025-06-19 10:42:39.688630 | orchestrator | 2025-06-19 10:42:39 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:42:39.688641 | orchestrator | 2025-06-19 10:42:39 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:42:42.719260 | orchestrator | 2025-06-19 10:42:42 | INFO  | Task c4a8d106-8db9-45ab-b6d7-889fae1c35e6 is in state STARTED 2025-06-19 10:42:42.719366 | orchestrator | 2025-06-19 10:42:42 | INFO  | Task 8d4e9e99-0749-4f59-a3d7-ae4a3b67fa7c is in state STARTED 2025-06-19 10:42:42.719507 | orchestrator | 2025-06-19 10:42:42 | INFO  | Task 72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:42:42.720099 | orchestrator | 2025-06-19 10:42:42 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:42:42.720139 | orchestrator | 2025-06-19 10:42:42 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:42:45.743830 | orchestrator | 2025-06-19 10:42:45 | INFO  | Task c4a8d106-8db9-45ab-b6d7-889fae1c35e6 is in state STARTED 2025-06-19 10:42:45.744181 | orchestrator | 2025-06-19 10:42:45 | INFO  | Task 8d4e9e99-0749-4f59-a3d7-ae4a3b67fa7c is in state STARTED 2025-06-19 10:42:45.744768 | orchestrator | 2025-06-19 10:42:45 | INFO  | Task 72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:42:45.745800 | orchestrator | 2025-06-19 10:42:45 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:42:45.745822 | orchestrator | 2025-06-19 10:42:45 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:42:48.777965 | orchestrator | 2025-06-19 10:42:48 | INFO  | Task c4a8d106-8db9-45ab-b6d7-889fae1c35e6 is in state STARTED 2025-06-19 10:42:48.778121 | orchestrator | 2025-06-19 10:42:48 | INFO  | Task 
8d4e9e99-0749-4f59-a3d7-ae4a3b67fa7c is in state STARTED 2025-06-19 10:43:16.156943 | orchestrator | 2025-06-19 10:43:16 | INFO  | Task 72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:43:16.157727 | orchestrator | 2025-06-19 10:43:16 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:43:16.157794 | orchestrator | 2025-06-19 10:43:16 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:43:19.198549 | orchestrator | 2025-06-19 10:43:19 | INFO  | Task c4a8d106-8db9-45ab-b6d7-889fae1c35e6 is in state STARTED 2025-06-19 10:43:19.199473 | orchestrator | 2025-06-19 10:43:19 | INFO  | Task 8d4e9e99-0749-4f59-a3d7-ae4a3b67fa7c is in state STARTED 2025-06-19 10:43:19.200056 | orchestrator | 2025-06-19 10:43:19 | INFO  | Task 72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:43:19.202224 | orchestrator | 2025-06-19 10:43:19 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:43:19.202519 | orchestrator | 2025-06-19 10:43:19 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:43:22.246491 | orchestrator | 2025-06-19 10:43:22 | INFO  | Task c4a8d106-8db9-45ab-b6d7-889fae1c35e6 is in state SUCCESS 2025-06-19 10:43:22.247410 | orchestrator | 2025-06-19 10:43:22 | INFO  | Task 8d4e9e99-0749-4f59-a3d7-ae4a3b67fa7c is in state STARTED 2025-06-19 10:43:22.247866 | orchestrator | 2025-06-19 10:43:22 | INFO  | Task 72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:43:22.248482 | orchestrator | 2025-06-19 10:43:22 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:43:22.248505 | orchestrator | 2025-06-19 10:43:22 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:43:25.266426 | orchestrator | 2025-06-19 10:43:25 | INFO  | Task 8d4e9e99-0749-4f59-a3d7-ae4a3b67fa7c is in state STARTED 2025-06-19 10:43:25.269052 | orchestrator | 2025-06-19 10:43:25 | INFO  | Task 
72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:43:25.269609 | orchestrator | 2025-06-19 10:43:25 | INFO  | Task 62ffb897-9ff7-4a84-816f-2406f8a12bdd is in state STARTED 2025-06-19 10:43:25.270259 | orchestrator | 2025-06-19 10:43:25 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:43:25.270284 | orchestrator | 2025-06-19 10:43:25 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:43:28.303932 | orchestrator | 2025-06-19 10:43:28 | INFO  | Task 8d4e9e99-0749-4f59-a3d7-ae4a3b67fa7c is in state STARTED 2025-06-19 10:43:28.305344 | orchestrator | 2025-06-19 10:43:28 | INFO  | Task 72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:43:28.306081 | orchestrator | 2025-06-19 10:43:28 | INFO  | Task 62ffb897-9ff7-4a84-816f-2406f8a12bdd is in state STARTED 2025-06-19 10:43:28.308293 | orchestrator | 2025-06-19 10:43:28 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:43:28.308331 | orchestrator | 2025-06-19 10:43:28 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:43:31.339303 | orchestrator | 2025-06-19 10:43:31 | INFO  | Task 8d4e9e99-0749-4f59-a3d7-ae4a3b67fa7c is in state STARTED 2025-06-19 10:43:31.340461 | orchestrator | 2025-06-19 10:43:31 | INFO  | Task 72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:43:31.342280 | orchestrator | 2025-06-19 10:43:31 | INFO  | Task 62ffb897-9ff7-4a84-816f-2406f8a12bdd is in state STARTED 2025-06-19 10:43:31.343519 | orchestrator | 2025-06-19 10:43:31 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:43:31.343740 | orchestrator | 2025-06-19 10:43:31 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:43:34.388888 | orchestrator | 2025-06-19 10:43:34 | INFO  | Task 8d4e9e99-0749-4f59-a3d7-ae4a3b67fa7c is in state STARTED 2025-06-19 10:43:34.389010 | orchestrator | 2025-06-19 10:43:34 | INFO  | Task 
72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:43:34.390559 | orchestrator | 2025-06-19 10:43:34 | INFO  | Task 62ffb897-9ff7-4a84-816f-2406f8a12bdd is in state STARTED 2025-06-19 10:43:34.390986 | orchestrator | 2025-06-19 10:43:34 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:43:34.391020 | orchestrator | 2025-06-19 10:43:34 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:43:37.432828 | orchestrator | 2025-06-19 10:43:37 | INFO  | Task 8d4e9e99-0749-4f59-a3d7-ae4a3b67fa7c is in state STARTED 2025-06-19 10:43:37.433956 | orchestrator | 2025-06-19 10:43:37 | INFO  | Task 72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:43:37.435767 | orchestrator | 2025-06-19 10:43:37 | INFO  | Task 62ffb897-9ff7-4a84-816f-2406f8a12bdd is in state STARTED 2025-06-19 10:43:37.437579 | orchestrator | 2025-06-19 10:43:37 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:43:37.437621 | orchestrator | 2025-06-19 10:43:37 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:43:40.463163 | orchestrator | 2025-06-19 10:43:40 | INFO  | Task 8d4e9e99-0749-4f59-a3d7-ae4a3b67fa7c is in state STARTED 2025-06-19 10:43:40.463933 | orchestrator | 2025-06-19 10:43:40 | INFO  | Task 72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:43:40.464512 | orchestrator | 2025-06-19 10:43:40 | INFO  | Task 62ffb897-9ff7-4a84-816f-2406f8a12bdd is in state STARTED 2025-06-19 10:43:40.467776 | orchestrator | 2025-06-19 10:43:40 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:43:40.467813 | orchestrator | 2025-06-19 10:43:40 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:43:43.495492 | orchestrator | 2025-06-19 10:43:43 | INFO  | Task 8d4e9e99-0749-4f59-a3d7-ae4a3b67fa7c is in state STARTED 2025-06-19 10:43:43.495599 | orchestrator | 2025-06-19 10:43:43 | INFO  | Task 
72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:43:43.496093 | orchestrator | 2025-06-19 10:43:43 | INFO  | Task 62ffb897-9ff7-4a84-816f-2406f8a12bdd is in state STARTED 2025-06-19 10:43:43.496600 | orchestrator | 2025-06-19 10:43:43 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:43:43.496622 | orchestrator | 2025-06-19 10:43:43 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:43:46.536246 | orchestrator | 2025-06-19 10:43:46 | INFO  | Task 8d4e9e99-0749-4f59-a3d7-ae4a3b67fa7c is in state STARTED 2025-06-19 10:43:46.536537 | orchestrator | 2025-06-19 10:43:46 | INFO  | Task 72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:43:46.540599 | orchestrator | 2025-06-19 10:43:46 | INFO  | Task 62ffb897-9ff7-4a84-816f-2406f8a12bdd is in state STARTED 2025-06-19 10:43:46.542658 | orchestrator | 2025-06-19 10:43:46 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:43:46.542916 | orchestrator | 2025-06-19 10:43:46 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:43:49.588064 | orchestrator | 2025-06-19 10:43:49 | INFO  | Task 8d4e9e99-0749-4f59-a3d7-ae4a3b67fa7c is in state STARTED 2025-06-19 10:43:49.588709 | orchestrator | 2025-06-19 10:43:49 | INFO  | Task 72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:43:49.589071 | orchestrator | 2025-06-19 10:43:49 | INFO  | Task 62ffb897-9ff7-4a84-816f-2406f8a12bdd is in state STARTED 2025-06-19 10:43:49.589775 | orchestrator | 2025-06-19 10:43:49 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:43:49.589807 | orchestrator | 2025-06-19 10:43:49 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:43:52.616486 | orchestrator | 2025-06-19 10:43:52 | INFO  | Task 8d4e9e99-0749-4f59-a3d7-ae4a3b67fa7c is in state STARTED 2025-06-19 10:43:52.616854 | orchestrator | 2025-06-19 10:43:52 | INFO  | Task 
72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:43:52.617002 | orchestrator | 2025-06-19 10:43:52 | INFO  | Task 62ffb897-9ff7-4a84-816f-2406f8a12bdd is in state STARTED 2025-06-19 10:43:52.617517 | orchestrator | 2025-06-19 10:43:52 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:43:52.617703 | orchestrator | 2025-06-19 10:43:52 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:43:55.655943 | orchestrator | 2025-06-19 10:43:55 | INFO  | Task 8d4e9e99-0749-4f59-a3d7-ae4a3b67fa7c is in state STARTED 2025-06-19 10:43:55.658447 | orchestrator | 2025-06-19 10:43:55 | INFO  | Task 72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:43:55.658698 | orchestrator | 2025-06-19 10:43:55 | INFO  | Task 62ffb897-9ff7-4a84-816f-2406f8a12bdd is in state STARTED 2025-06-19 10:43:55.659409 | orchestrator | 2025-06-19 10:43:55 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:43:55.659446 | orchestrator | 2025-06-19 10:43:55 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:43:58.683132 | orchestrator | 2025-06-19 10:43:58 | INFO  | Task 8d4e9e99-0749-4f59-a3d7-ae4a3b67fa7c is in state STARTED 2025-06-19 10:43:58.683545 | orchestrator | 2025-06-19 10:43:58 | INFO  | Task 72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:43:58.684560 | orchestrator | 2025-06-19 10:43:58 | INFO  | Task 62ffb897-9ff7-4a84-816f-2406f8a12bdd is in state STARTED 2025-06-19 10:43:58.685696 | orchestrator | 2025-06-19 10:43:58 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:43:58.685764 | orchestrator | 2025-06-19 10:43:58 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:44:01.713223 | orchestrator | 2025-06-19 10:44:01 | INFO  | Task 8d4e9e99-0749-4f59-a3d7-ae4a3b67fa7c is in state STARTED 2025-06-19 10:44:01.714612 | orchestrator | 2025-06-19 10:44:01 | INFO  | Task 
72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:44:01.716095 | orchestrator | 2025-06-19 10:44:01 | INFO  | Task 62ffb897-9ff7-4a84-816f-2406f8a12bdd is in state STARTED 2025-06-19 10:44:01.717974 | orchestrator | 2025-06-19 10:44:01 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:44:01.718008 | orchestrator | 2025-06-19 10:44:01 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:44:04.754906 | orchestrator | 2025-06-19 10:44:04 | INFO  | Task 8d4e9e99-0749-4f59-a3d7-ae4a3b67fa7c is in state STARTED 2025-06-19 10:44:04.757104 | orchestrator | 2025-06-19 10:44:04 | INFO  | Task 72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:44:04.760240 | orchestrator | 2025-06-19 10:44:04 | INFO  | Task 62ffb897-9ff7-4a84-816f-2406f8a12bdd is in state STARTED 2025-06-19 10:44:04.761701 | orchestrator | 2025-06-19 10:44:04 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:44:04.762091 | orchestrator | 2025-06-19 10:44:04 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:44:07.793785 | orchestrator | 2025-06-19 10:44:07 | INFO  | Task 8d4e9e99-0749-4f59-a3d7-ae4a3b67fa7c is in state STARTED 2025-06-19 10:44:07.795062 | orchestrator | 2025-06-19 10:44:07 | INFO  | Task 72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:44:07.795869 | orchestrator | 2025-06-19 10:44:07 | INFO  | Task 62ffb897-9ff7-4a84-816f-2406f8a12bdd is in state STARTED 2025-06-19 10:44:07.796991 | orchestrator | 2025-06-19 10:44:07 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:44:07.797080 | orchestrator | 2025-06-19 10:44:07 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:44:10.850167 | orchestrator | 2025-06-19 10:44:10 | INFO  | Task 8d4e9e99-0749-4f59-a3d7-ae4a3b67fa7c is in state STARTED 2025-06-19 10:44:10.851781 | orchestrator | 2025-06-19 10:44:10 | INFO  | Task 
72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:44:10.853767 | orchestrator | 2025-06-19 10:44:10 | INFO  | Task 62ffb897-9ff7-4a84-816f-2406f8a12bdd is in state STARTED 2025-06-19 10:44:10.855096 | orchestrator | 2025-06-19 10:44:10 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:44:10.855122 | orchestrator | 2025-06-19 10:44:10 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:44:13.903470 | orchestrator | 2025-06-19 10:44:13 | INFO  | Task 8d4e9e99-0749-4f59-a3d7-ae4a3b67fa7c is in state STARTED 2025-06-19 10:44:13.904107 | orchestrator | 2025-06-19 10:44:13 | INFO  | Task 72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state STARTED 2025-06-19 10:44:13.905045 | orchestrator | 2025-06-19 10:44:13 | INFO  | Task 62ffb897-9ff7-4a84-816f-2406f8a12bdd is in state STARTED 2025-06-19 10:44:13.905772 | orchestrator | 2025-06-19 10:44:13 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:44:13.905796 | orchestrator | 2025-06-19 10:44:13 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:44:16.936167 | orchestrator | 2025-06-19 10:44:16 | INFO  | Task 8d4e9e99-0749-4f59-a3d7-ae4a3b67fa7c is in state STARTED 2025-06-19 10:44:16.942381 | orchestrator | 2025-06-19 10:44:16 | INFO  | Task 72732b72-c3d9-4b2d-a47a-1ca0f5b7209f is in state SUCCESS 2025-06-19 10:44:16.943568 | orchestrator | 2025-06-19 10:44:16.943604 | orchestrator | 2025-06-19 10:44:16.943617 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-06-19 10:44:16.944227 | orchestrator | 2025-06-19 10:44:16.944241 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-06-19 10:44:16.944253 | orchestrator | Thursday 19 June 2025 10:42:42 +0000 (0:00:00.110) 0:00:00.110 ********* 2025-06-19 10:44:16.944264 | orchestrator | changed: [localhost] 2025-06-19 10:44:16.944276 | orchestrator | 2025-06-19 
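The status lines above come from the deployment's wait loop: each pending task is re-queried every few seconds until it leaves STARTED, at which point its terminal state (here SUCCESS) is reported. A minimal sketch of such a loop, with a hypothetical `get_state` callback standing in for the real task-state query (this is not the actual OSISM client API):

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, max_checks=1000):
    """Poll each task until none is in state STARTED.

    get_state(task_id) -> state string; a hypothetical callback, not
    the real backend query used by the deployment.
    """
    for _ in range(max_checks):
        states = {tid: get_state(tid) for tid in task_ids}
        for tid, state in states.items():
            print(f"Task {tid} is in state {state}")
        if all(state != "STARTED" for state in states.values()):
            return states
        print(f"Wait {interval:g} second(s) until the next check")
        time.sleep(interval)
    raise TimeoutError("tasks did not leave STARTED within the check budget")
```

The `max_checks` bound is an assumption added here so the sketch cannot spin forever; the log itself shows no such cutoff.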
10:44:16.944288 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-06-19 10:44:16.944299 | orchestrator | Thursday 19 June 2025 10:42:43 +0000 (0:00:00.767) 0:00:00.878 ********* 2025-06-19 10:44:16.944339 | orchestrator | changed: [localhost] 2025-06-19 10:44:16.944351 | orchestrator | 2025-06-19 10:44:16.944363 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2025-06-19 10:44:16.944374 | orchestrator | Thursday 19 June 2025 10:43:14 +0000 (0:00:31.448) 0:00:32.328 ********* 2025-06-19 10:44:16.944385 | orchestrator | changed: [localhost] 2025-06-19 10:44:16.944395 | orchestrator | 2025-06-19 10:44:16.944407 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-19 10:44:16.944418 | orchestrator | 2025-06-19 10:44:16.944429 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-19 10:44:16.944439 | orchestrator | Thursday 19 June 2025 10:43:19 +0000 (0:00:05.036) 0:00:37.364 ********* 2025-06-19 10:44:16.944450 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:44:16.944461 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:44:16.944472 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:44:16.944483 | orchestrator | 2025-06-19 10:44:16.944511 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-19 10:44:16.944522 | orchestrator | Thursday 19 June 2025 10:43:20 +0000 (0:00:00.911) 0:00:38.276 ********* 2025-06-19 10:44:16.944533 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2025-06-19 10:44:16.944544 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2025-06-19 10:44:16.944555 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2025-06-19 10:44:16.944589 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 
2025-06-19 10:44:16.944600 | orchestrator | 2025-06-19 10:44:16.944611 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2025-06-19 10:44:16.944621 | orchestrator | skipping: no hosts matched 2025-06-19 10:44:16.944633 | orchestrator | 2025-06-19 10:44:16.944644 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-19 10:44:16.944655 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-19 10:44:16.944669 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-19 10:44:16.944681 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-19 10:44:16.944692 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-19 10:44:16.944702 | orchestrator | 2025-06-19 10:44:16.944713 | orchestrator | 2025-06-19 10:44:16.944795 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-19 10:44:16.944808 | orchestrator | Thursday 19 June 2025 10:43:21 +0000 (0:00:01.063) 0:00:39.339 ********* 2025-06-19 10:44:16.944819 | orchestrator | =============================================================================== 2025-06-19 10:44:16.944830 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 31.45s 2025-06-19 10:44:16.944841 | orchestrator | Download ironic-agent kernel -------------------------------------------- 5.04s 2025-06-19 10:44:16.944851 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.06s 2025-06-19 10:44:16.944862 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.91s 2025-06-19 10:44:16.944872 | orchestrator | Ensure the destination directory exists --------------------------------- 0.77s 
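The two grouping tasks above are how the play decides which hosts a role applies to: each host is placed into a dynamic group named after a flag and its value (here every node lands in enable_ironic_False), so the following play, which targets only the _True group, matches no hosts and is skipped. A rough stand-in for that mechanism (hypothetical host data; this mimics, but is not, Ansible's group_by module):

```python
def group_hosts_by_flag(hostvars, flag):
    """Place each host into a group named '<flag>_<value>', mirroring
    the dynamic groups created by the 'Group hosts based on enabled
    services' task. Host names and variables are illustrative."""
    groups = {}
    for host, variables in hostvars.items():
        group = f"{flag}_{variables.get(flag, False)}"
        groups.setdefault(group, []).append(host)
    return groups

hosts = {
    "testbed-node-0": {"enable_ironic": False},
    "testbed-node-1": {"enable_ironic": False},
    "testbed-node-2": {"enable_ironic": False},
}
groups = group_hosts_by_flag(hosts, "enable_ironic")
# all hosts land in enable_ironic_False; a play targeting
# enable_ironic_True finds no members and is skipped
```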
2025-06-19 10:44:16.944883 | orchestrator | 2025-06-19 10:44:16.944893 | orchestrator | 2025-06-19 10:44:16.944904 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-19 10:44:16.944915 | orchestrator | 2025-06-19 10:44:16.944925 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-19 10:44:16.944936 | orchestrator | Thursday 19 June 2025 10:40:12 +0000 (0:00:00.428) 0:00:00.428 ********* 2025-06-19 10:44:16.944946 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:44:16.944957 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:44:16.944968 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:44:16.944978 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:44:16.944989 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:44:16.944999 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:44:16.945009 | orchestrator | 2025-06-19 10:44:16.945020 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-19 10:44:16.945030 | orchestrator | Thursday 19 June 2025 10:40:12 +0000 (0:00:00.763) 0:00:01.192 ********* 2025-06-19 10:44:16.945041 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-06-19 10:44:16.945052 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-06-19 10:44:16.945064 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-06-19 10:44:16.945074 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-06-19 10:44:16.945085 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-06-19 10:44:16.945096 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-06-19 10:44:16.945106 | orchestrator | 2025-06-19 10:44:16.945117 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-06-19 10:44:16.945128 | orchestrator | 2025-06-19 10:44:16.945138 | orchestrator 
| TASK [neutron : include_tasks] ************************************************* 2025-06-19 10:44:16.945149 | orchestrator | Thursday 19 June 2025 10:40:13 +0000 (0:00:00.515) 0:00:01.707 ********* 2025-06-19 10:44:16.945200 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-19 10:44:16.945223 | orchestrator | 2025-06-19 10:44:16.945234 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-06-19 10:44:16.945245 | orchestrator | Thursday 19 June 2025 10:40:14 +0000 (0:00:01.285) 0:00:02.993 ********* 2025-06-19 10:44:16.945255 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:44:16.945266 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:44:16.945293 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:44:16.945330 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:44:16.945342 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:44:16.945353 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:44:16.945364 | orchestrator | 2025-06-19 10:44:16.945375 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-06-19 10:44:16.945385 | orchestrator | Thursday 19 June 2025 10:40:15 +0000 (0:00:01.357) 0:00:04.350 ********* 2025-06-19 10:44:16.945396 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:44:16.945407 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:44:16.945417 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:44:16.945428 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:44:16.945440 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:44:16.945452 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:44:16.945464 | orchestrator | 2025-06-19 10:44:16.945476 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-06-19 10:44:16.945496 | orchestrator | Thursday 19 June 2025 10:40:17 +0000 
(0:00:01.263) 0:00:05.613 ********* 2025-06-19 10:44:16.945509 | orchestrator | ok: [testbed-node-0] => { 2025-06-19 10:44:16.945521 | orchestrator |  "changed": false, 2025-06-19 10:44:16.945533 | orchestrator |  "msg": "All assertions passed" 2025-06-19 10:44:16.945545 | orchestrator | } 2025-06-19 10:44:16.945557 | orchestrator | ok: [testbed-node-1] => { 2025-06-19 10:44:16.945569 | orchestrator |  "changed": false, 2025-06-19 10:44:16.945581 | orchestrator |  "msg": "All assertions passed" 2025-06-19 10:44:16.945593 | orchestrator | } 2025-06-19 10:44:16.945605 | orchestrator | ok: [testbed-node-2] => { 2025-06-19 10:44:16.945616 | orchestrator |  "changed": false, 2025-06-19 10:44:16.945628 | orchestrator |  "msg": "All assertions passed" 2025-06-19 10:44:16.945640 | orchestrator | } 2025-06-19 10:44:16.945652 | orchestrator | ok: [testbed-node-3] => { 2025-06-19 10:44:16.945663 | orchestrator |  "changed": false, 2025-06-19 10:44:16.945676 | orchestrator |  "msg": "All assertions passed" 2025-06-19 10:44:16.945688 | orchestrator | } 2025-06-19 10:44:16.945700 | orchestrator | ok: [testbed-node-4] => { 2025-06-19 10:44:16.945712 | orchestrator |  "changed": false, 2025-06-19 10:44:16.945724 | orchestrator |  "msg": "All assertions passed" 2025-06-19 10:44:16.945735 | orchestrator | } 2025-06-19 10:44:16.945747 | orchestrator | ok: [testbed-node-5] => { 2025-06-19 10:44:16.945759 | orchestrator |  "changed": false, 2025-06-19 10:44:16.945771 | orchestrator |  "msg": "All assertions passed" 2025-06-19 10:44:16.945784 | orchestrator | } 2025-06-19 10:44:16.945796 | orchestrator | 2025-06-19 10:44:16.945806 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-06-19 10:44:16.945817 | orchestrator | Thursday 19 June 2025 10:40:17 +0000 (0:00:00.760) 0:00:06.374 ********* 2025-06-19 10:44:16.945828 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:44:16.945838 | orchestrator | skipping: [testbed-node-1] 
2025-06-19 10:44:16.945849 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:44:16.945859 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:44:16.945870 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:44:16.945880 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:44:16.945891 | orchestrator | 2025-06-19 10:44:16.945901 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-06-19 10:44:16.945912 | orchestrator | Thursday 19 June 2025 10:40:18 +0000 (0:00:00.682) 0:00:07.056 ********* 2025-06-19 10:44:16.945923 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-06-19 10:44:16.945933 | orchestrator | 2025-06-19 10:44:16.945951 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-06-19 10:44:16.945962 | orchestrator | Thursday 19 June 2025 10:40:22 +0000 (0:00:03.621) 0:00:10.678 ********* 2025-06-19 10:44:16.945972 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-06-19 10:44:16.945983 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-06-19 10:44:16.945994 | orchestrator | 2025-06-19 10:44:16.946005 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-06-19 10:44:16.946066 | orchestrator | Thursday 19 June 2025 10:40:29 +0000 (0:00:06.870) 0:00:17.548 ********* 2025-06-19 10:44:16.946082 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-19 10:44:16.946093 | orchestrator | 2025-06-19 10:44:16.946103 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-06-19 10:44:16.946114 | orchestrator | Thursday 19 June 2025 10:40:32 +0000 (0:00:03.320) 0:00:20.869 ********* 2025-06-19 10:44:16.946125 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-19 
10:44:16.946136 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-06-19 10:44:16.946146 | orchestrator | 2025-06-19 10:44:16.946157 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-06-19 10:44:16.946168 | orchestrator | Thursday 19 June 2025 10:40:36 +0000 (0:00:03.952) 0:00:24.822 ********* 2025-06-19 10:44:16.946261 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-19 10:44:16.946277 | orchestrator | 2025-06-19 10:44:16.946288 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-06-19 10:44:16.946298 | orchestrator | Thursday 19 June 2025 10:40:40 +0000 (0:00:03.594) 0:00:28.416 ********* 2025-06-19 10:44:16.946378 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-06-19 10:44:16.946390 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-06-19 10:44:16.946401 | orchestrator | 2025-06-19 10:44:16.946412 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-06-19 10:44:16.946423 | orchestrator | Thursday 19 June 2025 10:40:47 +0000 (0:00:07.698) 0:00:36.114 ********* 2025-06-19 10:44:16.946433 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:44:16.946444 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:44:16.946494 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:44:16.946507 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:44:16.946518 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:44:16.946529 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:44:16.946539 | orchestrator | 2025-06-19 10:44:16.946550 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-06-19 10:44:16.946561 | orchestrator | Thursday 19 June 2025 10:40:48 +0000 (0:00:00.726) 0:00:36.841 ********* 2025-06-19 10:44:16.946572 | orchestrator 
| skipping: [testbed-node-2] 2025-06-19 10:44:16.946583 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:44:16.946593 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:44:16.946603 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:44:16.946612 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:44:16.946622 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:44:16.946631 | orchestrator | 2025-06-19 10:44:16.946641 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-06-19 10:44:16.946650 | orchestrator | Thursday 19 June 2025 10:40:50 +0000 (0:00:02.187) 0:00:39.029 ********* 2025-06-19 10:44:16.946660 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:44:16.946670 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:44:16.946679 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:44:16.946689 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:44:16.946699 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:44:16.946708 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:44:16.946718 | orchestrator | 2025-06-19 10:44:16.946727 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-06-19 10:44:16.946752 | orchestrator | Thursday 19 June 2025 10:40:51 +0000 (0:00:01.289) 0:00:40.318 ********* 2025-06-19 10:44:16.946762 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:44:16.946772 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:44:16.946781 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:44:16.946791 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:44:16.946800 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:44:16.946810 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:44:16.946819 | orchestrator | 2025-06-19 10:44:16.946829 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-06-19 10:44:16.946838 | orchestrator | Thursday 19 
June 2025 10:40:53 +0000 (0:00:01.998) 0:00:42.317 ********* 2025-06-19 10:44:16.946851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-19 10:44:16.946866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-19 10:44:16.946877 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-19 10:44:16.946916 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-19 10:44:16.946947 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-19 10:44:16.946958 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-19 10:44:16.946968 | orchestrator | 2025-06-19 10:44:16.946978 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-06-19 10:44:16.946988 | orchestrator | Thursday 19 June 2025 10:40:56 +0000 (0:00:02.926) 0:00:45.243 ********* 2025-06-19 10:44:16.946998 | orchestrator | [WARNING]: Skipped 2025-06-19 10:44:16.947008 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-06-19 10:44:16.947017 | orchestrator | due to this access issue: 2025-06-19 10:44:16.947027 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-06-19 10:44:16.947037 | orchestrator | a directory 2025-06-19 10:44:16.947046 | orchestrator | 
ok: [testbed-node-0 -> localhost] 2025-06-19 10:44:16.947056 | orchestrator | 2025-06-19 10:44:16.947065 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-06-19 10:44:16.947075 | orchestrator | Thursday 19 June 2025 10:40:57 +0000 (0:00:00.818) 0:00:46.062 ********* 2025-06-19 10:44:16.947084 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-19 10:44:16.947095 | orchestrator | 2025-06-19 10:44:16.947104 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-06-19 10:44:16.947114 | orchestrator | Thursday 19 June 2025 10:40:58 +0000 (0:00:01.198) 0:00:47.260 ********* 2025-06-19 10:44:16.947124 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-19 10:44:16.947161 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-19 10:44:16.947184 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-19 10:44:16.947194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-19 10:44:16.947205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-19 10:44:16.947215 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-19 10:44:16.947230 | orchestrator | 2025-06-19 
10:44:16.947240 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-06-19 10:44:16.947275 | orchestrator | Thursday 19 June 2025 10:41:02 +0000 (0:00:03.140) 0:00:50.401 ********* 2025-06-19 10:44:16.947287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-19 10:44:16.947297 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:44:16.947332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-19 10:44:16.947343 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:44:16.947353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-19 10:44:16.947362 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:44:16.947372 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 
'timeout': '30'}}})  2025-06-19 10:44:16.947382 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:44:16.947427 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-19 10:44:16.947439 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:44:16.947453 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-19 10:44:16.947463 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:44:16.947473 | orchestrator | 2025-06-19 10:44:16.947482 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-06-19 10:44:16.947492 | 
orchestrator | Thursday 19 June 2025 10:41:04 +0000 (0:00:02.444) 0:00:52.845 ********* 2025-06-19 10:44:16.947502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-19 10:44:16.947512 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:44:16.947521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}})  2025-06-19 10:44:16.947531 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:44:16.947541 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-19 10:44:16.947557 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:44:16.947593 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-19 10:44:16.947604 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:44:16.947618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-19 10:44:16.947628 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:44:16.947638 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-19 10:44:16.947648 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:44:16.947657 | orchestrator | 2025-06-19 10:44:16.947667 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-06-19 10:44:16.947676 | orchestrator | Thursday 19 June 2025 10:41:07 +0000 (0:00:02.828) 0:00:55.673 ********* 2025-06-19 10:44:16.947685 | orchestrator | skipping: [testbed-node-1] 
2025-06-19 10:44:16.947695 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:44:16.947704 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:44:16.947713 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:44:16.947723 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:44:16.947732 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:44:16.947741 | orchestrator | 2025-06-19 10:44:16.947751 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-06-19 10:44:16.947766 | orchestrator | Thursday 19 June 2025 10:41:09 +0000 (0:00:01.748) 0:00:57.421 ********* 2025-06-19 10:44:16.947776 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:44:16.947785 | orchestrator | 2025-06-19 10:44:16.947795 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-06-19 10:44:16.947804 | orchestrator | Thursday 19 June 2025 10:41:09 +0000 (0:00:00.125) 0:00:57.547 ********* 2025-06-19 10:44:16.947813 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:44:16.947823 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:44:16.947832 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:44:16.947841 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:44:16.947851 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:44:16.947860 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:44:16.947869 | orchestrator | 2025-06-19 10:44:16.947879 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-06-19 10:44:16.947888 | orchestrator | Thursday 19 June 2025 10:41:09 +0000 (0:00:00.594) 0:00:58.141 ********* 2025-06-19 10:44:16.947905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-19 10:44:16.947915 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:44:16.947930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-19 10:44:16.947940 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:44:16.947949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-19 10:44:16.947959 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:44:16.947975 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-19 10:44:16.947985 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:44:16.947995 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-19 10:44:16.948004 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:44:16.948024 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-19 10:44:16.948034 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:44:16.948043 | orchestrator | 2025-06-19 10:44:16.948053 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-06-19 10:44:16.948062 | orchestrator | Thursday 19 June 2025 10:41:11 +0000 (0:00:01.803) 0:00:59.944 ********* 2025-06-19 10:44:16.948076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-19 10:44:16.948087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-19 10:44:16.948104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-19 10:44:16.948114 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-19 10:44:16.948130 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-19 10:44:16.948147 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-19 10:44:16.948157 | orchestrator | 2025-06-19 10:44:16.948167 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-06-19 10:44:16.948176 | orchestrator | Thursday 19 June 2025 10:41:14 +0000 (0:00:03.018) 0:01:02.963 ********* 2025-06-19 10:44:16.948186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-19 10:44:16.948203 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-19 10:44:16.948218 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-19 10:44:16.948228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-19 10:44:16.948243 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-19 10:44:16.948259 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-19 10:44:16.948269 | orchestrator | 2025-06-19 10:44:16.948278 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-06-19 10:44:16.948288 | orchestrator | Thursday 19 June 2025 10:41:19 +0000 (0:00:05.228) 0:01:08.192 ********* 2025-06-19 10:44:16.948297 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-19 10:44:16.948324 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:44:16.948335 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-19 10:44:16.948350 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:44:16.948360 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-19 10:44:16.948370 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:44:16.948384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-19 10:44:16.948400 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-19 10:44:16.948410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-19 10:44:16.948420 | orchestrator | 2025-06-19 10:44:16.948430 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-06-19 10:44:16.948439 | orchestrator | Thursday 19 June 2025 
10:41:23 +0000 (0:00:03.680) 0:01:11.872 ********* 2025-06-19 10:44:16.948449 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:44:16.948458 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:44:16.948468 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:44:16.948477 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:44:16.948486 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:44:16.948496 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:44:16.948505 | orchestrator | 2025-06-19 10:44:16.948514 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-06-19 10:44:16.948524 | orchestrator | Thursday 19 June 2025 10:41:26 +0000 (0:00:02.671) 0:01:14.544 ********* 2025-06-19 10:44:16.948539 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-19 10:44:16.948550 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:44:16.948569 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-19 10:44:16.948579 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:44:16.948589 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-19 10:44:16.948599 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:44:16.948608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-19 10:44:16.948619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-19 10:44:16.948635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-19 10:44:16.948651 | orchestrator | 2025-06-19 10:44:16.948660 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-06-19 10:44:16.948674 | orchestrator | Thursday 19 June 2025 10:41:30 +0000 (0:00:03.871) 0:01:18.416 ********* 2025-06-19 10:44:16.948684 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:44:16.948693 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:44:16.948702 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:44:16.948712 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:44:16.948721 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:44:16.948730 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:44:16.948740 | orchestrator | 2025-06-19 10:44:16.948749 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-06-19 10:44:16.948758 | orchestrator | Thursday 19 June 2025 10:41:32 +0000 (0:00:02.178) 0:01:20.595 ********* 2025-06-19 10:44:16.948768 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:44:16.948777 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:44:16.948786 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:44:16.948796 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:44:16.948805 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:44:16.948814 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:44:16.948823 | orchestrator | 2025-06-19 10:44:16.948833 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-06-19 10:44:16.948842 | orchestrator | Thursday 19 June 2025 10:41:34 +0000 (0:00:02.396) 0:01:22.991 ********* 2025-06-19 10:44:16.948852 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:44:16.948861 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:44:16.948870 | orchestrator | skipping: 
[testbed-node-3] 2025-06-19 10:44:16.948879 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:44:16.948889 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:44:16.948898 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:44:16.948907 | orchestrator | 2025-06-19 10:44:16.948917 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-06-19 10:44:16.948926 | orchestrator | Thursday 19 June 2025 10:41:37 +0000 (0:00:02.959) 0:01:25.951 ********* 2025-06-19 10:44:16.948936 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:44:16.948945 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:44:16.948955 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:44:16.948964 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:44:16.948973 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:44:16.948982 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:44:16.948992 | orchestrator | 2025-06-19 10:44:16.949001 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-06-19 10:44:16.949011 | orchestrator | Thursday 19 June 2025 10:41:40 +0000 (0:00:02.858) 0:01:28.810 ********* 2025-06-19 10:44:16.949020 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:44:16.949030 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:44:16.949039 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:44:16.949048 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:44:16.949057 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:44:16.949067 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:44:16.949076 | orchestrator | 2025-06-19 10:44:16.949085 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-06-19 10:44:16.949095 | orchestrator | Thursday 19 June 2025 10:41:43 +0000 (0:00:02.836) 0:01:31.646 ********* 2025-06-19 10:44:16.949104 | orchestrator | skipping: 
[testbed-node-0] 2025-06-19 10:44:16.949114 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:44:16.949123 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:44:16.949132 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:44:16.949147 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:44:16.949157 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:44:16.949166 | orchestrator | 2025-06-19 10:44:16.949175 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-06-19 10:44:16.949185 | orchestrator | Thursday 19 June 2025 10:41:47 +0000 (0:00:03.873) 0:01:35.520 ********* 2025-06-19 10:44:16.949194 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-19 10:44:16.949204 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:44:16.949213 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-19 10:44:16.949222 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:44:16.949232 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-19 10:44:16.949241 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:44:16.949251 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-19 10:44:16.949260 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:44:16.949270 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-19 10:44:16.949279 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:44:16.949293 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-19 10:44:16.949303 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:44:16.949329 | orchestrator | 2025-06-19 10:44:16.949339 | orchestrator | TASK [neutron : Copying over 
l3_agent.ini] ************************************* 2025-06-19 10:44:16.949348 | orchestrator | Thursday 19 June 2025 10:41:49 +0000 (0:00:02.523) 0:01:38.043 ********* 2025-06-19 10:44:16.949362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-19 10:44:16.949373 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:44:16.949382 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-19 10:44:16.949392 | orchestrator | skipping: 
[testbed-node-3] 2025-06-19 10:44:16.949402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-19 10:44:16.949418 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:44:16.949428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-19 10:44:16.949437 | orchestrator | skipping: 
[testbed-node-2] 2025-06-19 10:44:16.949454 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-19 10:44:16.949464 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:44:16.949478 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-19 10:44:16.949488 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:44:16.949498 | orchestrator | 2025-06-19 10:44:16.949507 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-06-19 10:44:16.949516 | orchestrator | Thursday 19 June 2025 10:41:52 +0000 (0:00:02.596) 0:01:40.639 
********* 2025-06-19 10:44:16.949526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-19 10:44:16.949542 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:44:16.949552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-19 10:44:16.949562 | orchestrator | skipping: 
[testbed-node-1] 2025-06-19 10:44:16.949577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-19 10:44:16.949587 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:44:16.949596 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-19 10:44:16.949606 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:44:16.949623 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-19 10:44:16.949633 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:44:16.949643 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-19 10:44:16.949658 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:44:16.949668 | orchestrator | 2025-06-19 10:44:16.949677 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-06-19 10:44:16.949687 | orchestrator | Thursday 19 June 2025 10:41:54 +0000 (0:00:02.535) 0:01:43.174 ********* 2025-06-19 10:44:16.949696 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:44:16.949706 | orchestrator | skipping: [testbed-node-1] 
2025-06-19 10:44:16.949715 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:44:16.949724 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:44:16.949734 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:44:16.949743 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:44:16.949752 | orchestrator | 2025-06-19 10:44:16.949762 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-06-19 10:44:16.949771 | orchestrator | Thursday 19 June 2025 10:41:57 +0000 (0:00:02.929) 0:01:46.104 ********* 2025-06-19 10:44:16.949781 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:44:16.949790 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:44:16.949800 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:44:16.949809 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:44:16.949819 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:44:16.949828 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:44:16.949838 | orchestrator | 2025-06-19 10:44:16.949847 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************ 2025-06-19 10:44:16.949857 | orchestrator | Thursday 19 June 2025 10:42:01 +0000 (0:00:04.175) 0:01:50.279 ********* 2025-06-19 10:44:16.949866 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:44:16.949875 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:44:16.949885 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:44:16.949894 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:44:16.949903 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:44:16.949913 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:44:16.949922 | orchestrator | 2025-06-19 10:44:16.949931 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-06-19 10:44:16.949941 | orchestrator | Thursday 19 June 2025 10:42:03 +0000 (0:00:01.958) 0:01:52.238 ********* 
2025-06-19 10:44:16.949950 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:44:16.949960 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:44:16.949969 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:44:16.949978 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:44:16.949988 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:44:16.949997 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:44:16.950007 | orchestrator | 2025-06-19 10:44:16.950047 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-06-19 10:44:16.950059 | orchestrator | Thursday 19 June 2025 10:42:06 +0000 (0:00:02.802) 0:01:55.040 ********* 2025-06-19 10:44:16.950069 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:44:16.950078 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:44:16.950088 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:44:16.950099 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:44:16.950114 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:44:16.950129 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:44:16.950145 | orchestrator | 2025-06-19 10:44:16.950162 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-06-19 10:44:16.950188 | orchestrator | Thursday 19 June 2025 10:42:08 +0000 (0:00:02.038) 0:01:57.079 ********* 2025-06-19 10:44:16.950205 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:44:16.950221 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:44:16.950231 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:44:16.950240 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:44:16.950249 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:44:16.950259 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:44:16.950268 | orchestrator | 2025-06-19 10:44:16.950278 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] 
************************************ 2025-06-19 10:44:16.950287 | orchestrator | Thursday 19 June 2025 10:42:10 +0000 (0:00:02.021) 0:01:59.100 ********* 2025-06-19 10:44:16.950296 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:44:16.950363 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:44:16.950382 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:44:16.950391 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:44:16.950401 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:44:16.950410 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:44:16.950419 | orchestrator | 2025-06-19 10:44:16.950429 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-06-19 10:44:16.950438 | orchestrator | Thursday 19 June 2025 10:42:13 +0000 (0:00:02.605) 0:02:01.706 ********* 2025-06-19 10:44:16.950447 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:44:16.950457 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:44:16.950466 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:44:16.950475 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:44:16.950485 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:44:16.950494 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:44:16.950503 | orchestrator | 2025-06-19 10:44:16.950513 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-06-19 10:44:16.950522 | orchestrator | Thursday 19 June 2025 10:42:15 +0000 (0:00:02.132) 0:02:03.838 ********* 2025-06-19 10:44:16.950530 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:44:16.950538 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:44:16.950545 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:44:16.950553 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:44:16.950561 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:44:16.950568 | orchestrator | skipping: 
[testbed-node-5] 2025-06-19 10:44:16.950576 | orchestrator | 2025-06-19 10:44:16.950584 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-06-19 10:44:16.950591 | orchestrator | Thursday 19 June 2025 10:42:17 +0000 (0:00:01.952) 0:02:05.791 ********* 2025-06-19 10:44:16.950599 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:44:16.950607 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:44:16.950615 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:44:16.950622 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:44:16.950630 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:44:16.950637 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:44:16.950645 | orchestrator | 2025-06-19 10:44:16.950653 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-06-19 10:44:16.950660 | orchestrator | Thursday 19 June 2025 10:42:19 +0000 (0:00:02.204) 0:02:07.995 ********* 2025-06-19 10:44:16.950668 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-19 10:44:16.950677 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:44:16.950684 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-19 10:44:16.950692 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:44:16.950700 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-19 10:44:16.950708 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:44:16.950716 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-19 10:44:16.950729 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:44:16.950737 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  
2025-06-19 10:44:16.950745 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:44:16.950753 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-19 10:44:16.950761 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:44:16.950768 | orchestrator | 2025-06-19 10:44:16.950776 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-06-19 10:44:16.950784 | orchestrator | Thursday 19 June 2025 10:42:22 +0000 (0:00:02.878) 0:02:10.874 ********* 2025-06-19 10:44:16.950799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-19 10:44:16.950807 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:44:16.950819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-19 10:44:16.950827 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:44:16.950836 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-19 10:44:16.950844 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:44:16.950852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-19 10:44:16.950865 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:44:16.950874 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-19 10:44:16.950882 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:44:16.950894 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-19 10:44:16.950902 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:44:16.950910 | orchestrator | 2025-06-19 10:44:16.950918 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-06-19 10:44:16.950926 | orchestrator | Thursday 19 June 2025 10:42:25 +0000 (0:00:02.921) 0:02:13.795 ********* 2025-06-19 10:44:16.950938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-19 10:44:16.950946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-19 10:44:16.950960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-19 10:44:16.950969 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-19 
10:44:16.950982 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-19 10:44:16.950994 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-19 10:44:16.951002 | orchestrator | 2025-06-19 10:44:16.951010 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-06-19 10:44:16.951018 | orchestrator | Thursday 19 June 2025 10:42:28 +0000 (0:00:02.846) 0:02:16.641 ********* 2025-06-19 10:44:16.951026 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:44:16.951034 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:44:16.951042 | 
orchestrator | skipping: [testbed-node-2] 2025-06-19 10:44:16.951050 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:44:16.951057 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:44:16.951065 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:44:16.951078 | orchestrator | 2025-06-19 10:44:16.951086 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-06-19 10:44:16.951094 | orchestrator | Thursday 19 June 2025 10:42:28 +0000 (0:00:00.536) 0:02:17.177 ********* 2025-06-19 10:44:16.951101 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:44:16.951109 | orchestrator | 2025-06-19 10:44:16.951117 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-06-19 10:44:16.951125 | orchestrator | Thursday 19 June 2025 10:42:31 +0000 (0:00:02.516) 0:02:19.694 ********* 2025-06-19 10:44:16.951133 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:44:16.951140 | orchestrator | 2025-06-19 10:44:16.951148 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-06-19 10:44:16.951156 | orchestrator | Thursday 19 June 2025 10:42:33 +0000 (0:00:02.542) 0:02:22.236 ********* 2025-06-19 10:44:16.951163 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:44:16.951171 | orchestrator | 2025-06-19 10:44:16.951179 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-19 10:44:16.951187 | orchestrator | Thursday 19 June 2025 10:43:15 +0000 (0:00:41.668) 0:03:03.904 ********* 2025-06-19 10:44:16.951194 | orchestrator | 2025-06-19 10:44:16.951202 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-19 10:44:16.951210 | orchestrator | Thursday 19 June 2025 10:43:15 +0000 (0:00:00.196) 0:03:04.100 ********* 2025-06-19 10:44:16.951218 | orchestrator | 2025-06-19 10:44:16.951226 | orchestrator | TASK [neutron : 
Flush Handlers] ************************************************ 2025-06-19 10:44:16.951233 | orchestrator | Thursday 19 June 2025 10:43:15 +0000 (0:00:00.127) 0:03:04.228 ********* 2025-06-19 10:44:16.951241 | orchestrator | 2025-06-19 10:44:16.951249 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-19 10:44:16.951257 | orchestrator | Thursday 19 June 2025 10:43:15 +0000 (0:00:00.114) 0:03:04.342 ********* 2025-06-19 10:44:16.951264 | orchestrator | 2025-06-19 10:44:16.951272 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-19 10:44:16.951280 | orchestrator | Thursday 19 June 2025 10:43:16 +0000 (0:00:00.083) 0:03:04.425 ********* 2025-06-19 10:44:16.951288 | orchestrator | 2025-06-19 10:44:16.951296 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-19 10:44:16.951303 | orchestrator | Thursday 19 June 2025 10:43:16 +0000 (0:00:00.299) 0:03:04.725 ********* 2025-06-19 10:44:16.951331 | orchestrator | 2025-06-19 10:44:16.951339 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-06-19 10:44:16.951347 | orchestrator | Thursday 19 June 2025 10:43:16 +0000 (0:00:00.070) 0:03:04.795 ********* 2025-06-19 10:44:16.951355 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:44:16.951362 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:44:16.951370 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:44:16.951378 | orchestrator | 2025-06-19 10:44:16.951386 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-06-19 10:44:16.951393 | orchestrator | Thursday 19 June 2025 10:43:48 +0000 (0:00:32.330) 0:03:37.126 ********* 2025-06-19 10:44:16.951401 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:44:16.951409 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:44:16.951417 | 
orchestrator | changed: [testbed-node-5] 2025-06-19 10:44:16.951424 | orchestrator | 2025-06-19 10:44:16.951432 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-19 10:44:16.951445 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-06-19 10:44:16.951454 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-06-19 10:44:16.951462 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-06-19 10:44:16.951470 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-06-19 10:44:16.951484 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-06-19 10:44:16.951492 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-06-19 10:44:16.951500 | orchestrator | 2025-06-19 10:44:16.951507 | orchestrator | 2025-06-19 10:44:16.951515 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-19 10:44:16.951526 | orchestrator | Thursday 19 June 2025 10:44:16 +0000 (0:00:27.574) 0:04:04.701 ********* 2025-06-19 10:44:16.951534 | orchestrator | =============================================================================== 2025-06-19 10:44:16.951542 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 41.67s 2025-06-19 10:44:16.951550 | orchestrator | neutron : Restart neutron-server container ----------------------------- 32.33s 2025-06-19 10:44:16.951558 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 27.57s 2025-06-19 10:44:16.951565 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.70s 2025-06-19 10:44:16.951573 | 
orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.87s 2025-06-19 10:44:16.951581 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 5.23s 2025-06-19 10:44:16.951588 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 4.18s 2025-06-19 10:44:16.951596 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.95s 2025-06-19 10:44:16.951604 | orchestrator | neutron : Copying over dhcp_agent.ini ----------------------------------- 3.87s 2025-06-19 10:44:16.951612 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.87s 2025-06-19 10:44:16.951619 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 3.68s 2025-06-19 10:44:16.951627 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.62s 2025-06-19 10:44:16.951635 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.59s 2025-06-19 10:44:16.951642 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.32s 2025-06-19 10:44:16.951650 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.14s 2025-06-19 10:44:16.951658 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.02s 2025-06-19 10:44:16.951665 | orchestrator | neutron : Copying over sriov_agent.ini ---------------------------------- 2.96s 2025-06-19 10:44:16.951673 | orchestrator | neutron : Copying over metadata_agent.ini ------------------------------- 2.93s 2025-06-19 10:44:16.951681 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 2.93s 2025-06-19 10:44:16.951688 | orchestrator | neutron : Copying over neutron_taas.conf -------------------------------- 2.92s 2025-06-19 10:44:16.951696 | orchestrator | 
2025-06-19 10:44:16 | INFO  | Task 62ffb897-9ff7-4a84-816f-2406f8a12bdd is in state STARTED 2025-06-19 10:44:16.951704 | orchestrator | 2025-06-19 10:44:16 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:44:16.951712 | orchestrator | 2025-06-19 10:44:16 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:44:35.188113 | orchestrator | 2025-06-19 10:44:35 | INFO  | Task 8d4e9e99-0749-4f59-a3d7-ae4a3b67fa7c is in state STARTED 2025-06-19 10:44:35.190750 | orchestrator | 2025-06-19 10:44:35 | INFO  | Task 62ffb897-9ff7-4a84-816f-2406f8a12bdd is in state STARTED 2025-06-19 10:44:35.193794 | orchestrator | 2025-06-19 10:44:35 | INFO  | Task 
5a9378df-5449-4d0f-8b6d-5a37c0a8a559 is in state STARTED 2025-06-19 10:44:35.195703 | orchestrator | 2025-06-19 10:44:35 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:44:35.195789 | orchestrator | 2025-06-19 10:44:35 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:44:38.233839 | orchestrator | 2025-06-19 10:44:38 | INFO  | Task 8d4e9e99-0749-4f59-a3d7-ae4a3b67fa7c is in state STARTED 2025-06-19 10:44:38.234735 | orchestrator | 2025-06-19 10:44:38.234771 | orchestrator | 2025-06-19 10:44:38 | INFO  | Task 62ffb897-9ff7-4a84-816f-2406f8a12bdd is in state SUCCESS 2025-06-19 10:44:38.236014 | orchestrator | 2025-06-19 10:44:38.236048 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-19 10:44:38.236061 | orchestrator | 2025-06-19 10:44:38.236072 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-19 10:44:38.236084 | orchestrator | Thursday 19 June 2025 10:43:28 +0000 (0:00:00.612) 0:00:00.612 ********* 2025-06-19 10:44:38.236102 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:44:38.236124 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:44:38.236145 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:44:38.236165 | orchestrator | 2025-06-19 10:44:38.236185 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-19 10:44:38.236216 | orchestrator | Thursday 19 June 2025 10:43:28 +0000 (0:00:00.338) 0:00:00.951 ********* 2025-06-19 10:44:38.236237 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-06-19 10:44:38.236256 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-06-19 10:44:38.236276 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-06-19 10:44:38.236352 | orchestrator | 2025-06-19 10:44:38.236373 | orchestrator | PLAY [Apply role placement] 
****************************************************
2025-06-19 10:44:38.236393 | orchestrator |
2025-06-19 10:44:38.236413 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-06-19 10:44:38.236432 | orchestrator | Thursday 19 June 2025 10:43:28 +0000 (0:00:00.331) 0:00:01.282 *********
2025-06-19 10:44:38.236453 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-19 10:44:38.236804 | orchestrator |
2025-06-19 10:44:38.236841 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2025-06-19 10:44:38.236854 | orchestrator | Thursday 19 June 2025 10:43:29 +0000 (0:00:00.840) 0:00:02.123 *********
2025-06-19 10:44:38.236866 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2025-06-19 10:44:38.236878 | orchestrator |
2025-06-19 10:44:38.236890 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2025-06-19 10:44:38.236903 | orchestrator | Thursday 19 June 2025 10:43:33 +0000 (0:00:03.789) 0:00:05.913 *********
2025-06-19 10:44:38.236915 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2025-06-19 10:44:38.236927 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2025-06-19 10:44:38.236940 | orchestrator |
2025-06-19 10:44:38.236951 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2025-06-19 10:44:38.236965 | orchestrator | Thursday 19 June 2025 10:43:39 +0000 (0:00:06.558) 0:00:12.473 *********
2025-06-19 10:44:38.236984 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-06-19 10:44:38.237002 | orchestrator |
2025-06-19 10:44:38.237020 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2025-06-19 10:44:38.237038 | orchestrator | Thursday 19 June 2025 10:43:43 +0000 (0:00:03.526) 0:00:15.999 *********
2025-06-19 10:44:38.237055 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-06-19 10:44:38.237073 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2025-06-19 10:44:38.237092 | orchestrator |
2025-06-19 10:44:38.237110 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2025-06-19 10:44:38.237127 | orchestrator | Thursday 19 June 2025 10:43:47 +0000 (0:00:03.834) 0:00:19.834 *********
2025-06-19 10:44:38.237138 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-06-19 10:44:38.237149 | orchestrator |
2025-06-19 10:44:38.237175 | orchestrator | TASK [service-ks-register : placement | Granting user roles] *******************
2025-06-19 10:44:38.237187 | orchestrator | Thursday 19 June 2025 10:43:50 +0000 (0:00:03.708) 0:00:23.542 *********
2025-06-19 10:44:38.237197 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin)
2025-06-19 10:44:38.237208 | orchestrator |
2025-06-19 10:44:38.237218 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-06-19 10:44:38.237246 | orchestrator | Thursday 19 June 2025 10:43:55 +0000 (0:00:04.383) 0:00:27.926 *********
2025-06-19 10:44:38.237257 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:44:38.237268 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:44:38.237279 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:44:38.237337 | orchestrator |
2025-06-19 10:44:38.237349 | orchestrator | TASK [placement : Ensuring config directories exist] ***************************
2025-06-19 10:44:38.237359 | orchestrator | Thursday 19 June 2025 10:43:55 +0000 (0:00:00.358) 0:00:28.284 *********
2025-06-19 10:44:38.237373 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name':
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-19 10:44:38.237407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-19 10:44:38.237420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-19 10:44:38.237432 | orchestrator | 2025-06-19 10:44:38.237443 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-06-19 10:44:38.237454 | orchestrator | Thursday 19 June 2025 10:43:56 +0000 (0:00:00.967) 0:00:29.252 ********* 2025-06-19 10:44:38.237465 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:44:38.237475 | orchestrator | 2025-06-19 10:44:38.237486 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-06-19 10:44:38.237497 | orchestrator | Thursday 19 June 2025 10:43:56 +0000 (0:00:00.097) 0:00:29.349 ********* 2025-06-19 10:44:38.237507 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:44:38.237518 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:44:38.237529 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:44:38.237556 | orchestrator | 2025-06-19 10:44:38.237567 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-06-19 10:44:38.237577 | orchestrator | Thursday 19 June 2025 10:43:57 +0000 (0:00:00.350) 0:00:29.700 ********* 2025-06-19 10:44:38.237595 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:44:38.237606 | 
orchestrator | 2025-06-19 10:44:38.237616 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-06-19 10:44:38.237627 | orchestrator | Thursday 19 June 2025 10:43:57 +0000 (0:00:00.394) 0:00:30.094 ********* 2025-06-19 10:44:38.237639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-19 10:44:38.237660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-19 10:44:38.237672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-19 10:44:38.237684 | orchestrator | 2025-06-19 10:44:38.237694 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-06-19 10:44:38.237705 | orchestrator | Thursday 19 June 2025 10:43:59 +0000 (0:00:01.719) 0:00:31.813 ********* 2025-06-19 10:44:38.237716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-19 10:44:38.237739 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:44:38.237755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-19 10:44:38.237767 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:44:38.237785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-19 10:44:38.237796 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:44:38.237807 | orchestrator | 2025-06-19 10:44:38.237818 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-06-19 10:44:38.237829 | orchestrator | Thursday 19 June 2025 10:44:00 +0000 (0:00:00.801) 0:00:32.614 ********* 2025-06-19 10:44:38.237840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-19 10:44:38.237851 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:44:38.237863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-19 10:44:38.237880 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:44:38.237896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-19 10:44:38.237907 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:44:38.237918 | orchestrator | 2025-06-19 10:44:38.237928 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-06-19 10:44:38.237939 | orchestrator | Thursday 19 June 2025 10:44:00 +0000 (0:00:00.813) 0:00:33.427 ********* 2025-06-19 10:44:38.237957 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-19 10:44:38.237969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-19 10:44:38.237981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 
'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-19 10:44:38.237999 | orchestrator | 2025-06-19 10:44:38.238010 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-06-19 10:44:38.238069 | orchestrator | Thursday 19 June 2025 10:44:02 +0000 (0:00:01.775) 0:00:35.203 ********* 2025-06-19 10:44:38.238087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 
2025-06-19 10:44:38.238099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-19 10:44:38.238119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-19 10:44:38.238131 | orchestrator | 2025-06-19 10:44:38.238142 | orchestrator | TASK [placement : Copying over placement-api 
wsgi configuration] *************** 2025-06-19 10:44:38.238152 | orchestrator | Thursday 19 June 2025 10:44:05 +0000 (0:00:02.934) 0:00:38.137 ********* 2025-06-19 10:44:38.238163 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-06-19 10:44:38.238180 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-06-19 10:44:38.238191 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-06-19 10:44:38.238202 | orchestrator | 2025-06-19 10:44:38.238213 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-06-19 10:44:38.238223 | orchestrator | Thursday 19 June 2025 10:44:06 +0000 (0:00:01.277) 0:00:39.415 ********* 2025-06-19 10:44:38.238234 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:44:38.238245 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:44:38.238255 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:44:38.238266 | orchestrator | 2025-06-19 10:44:38.238276 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-06-19 10:44:38.238316 | orchestrator | Thursday 19 June 2025 10:44:08 +0000 (0:00:01.203) 0:00:40.618 ********* 2025-06-19 10:44:38.238333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-19 10:44:38.238345 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:44:38.238356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-19 10:44:38.238367 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:44:38.238385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': 
{'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-19 10:44:38.238397 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:44:38.238407 | orchestrator | 2025-06-19 10:44:38.238418 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-06-19 10:44:38.238435 | orchestrator | Thursday 19 June 2025 10:44:08 +0000 (0:00:00.468) 0:00:41.087 ********* 2025-06-19 10:44:38.238446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-19 10:44:38.238458 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-19 10:44:38.238475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-19 10:44:38.238486 | orchestrator | 2025-06-19 10:44:38.238497 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-06-19 10:44:38.238508 | orchestrator | Thursday 19 June 2025 10:44:09 +0000 (0:00:01.430) 0:00:42.518 ********* 2025-06-19 10:44:38.238518 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:44:38.238529 | orchestrator | 2025-06-19 10:44:38.238540 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 
2025-06-19 10:44:38.238550 | orchestrator | Thursday 19 June 2025 10:44:11 +0000 (0:00:01.994) 0:00:44.512 ********* 2025-06-19 10:44:38.238561 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:44:38.238572 | orchestrator | 2025-06-19 10:44:38.238582 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-06-19 10:44:38.238593 | orchestrator | Thursday 19 June 2025 10:44:14 +0000 (0:00:02.484) 0:00:46.997 ********* 2025-06-19 10:44:38.238603 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:44:38.238614 | orchestrator | 2025-06-19 10:44:38.238625 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-06-19 10:44:38.238635 | orchestrator | Thursday 19 June 2025 10:44:26 +0000 (0:00:11.923) 0:00:58.920 ********* 2025-06-19 10:44:38.238652 | orchestrator | 2025-06-19 10:44:38.238663 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-06-19 10:44:38.238673 | orchestrator | Thursday 19 June 2025 10:44:26 +0000 (0:00:00.140) 0:00:59.061 ********* 2025-06-19 10:44:38.238684 | orchestrator | 2025-06-19 10:44:38.238701 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-06-19 10:44:38.238713 | orchestrator | Thursday 19 June 2025 10:44:26 +0000 (0:00:00.150) 0:00:59.213 ********* 2025-06-19 10:44:38.238723 | orchestrator | 2025-06-19 10:44:38.238734 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-06-19 10:44:38.238744 | orchestrator | Thursday 19 June 2025 10:44:26 +0000 (0:00:00.137) 0:00:59.350 ********* 2025-06-19 10:44:38.238755 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:44:38.238766 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:44:38.238777 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:44:38.238787 | orchestrator | 2025-06-19 10:44:38.238798 | orchestrator | PLAY RECAP 
********************************************************************* 2025-06-19 10:44:38.238810 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-19 10:44:38.238823 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-19 10:44:38.238834 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-19 10:44:38.238845 | orchestrator | 2025-06-19 10:44:38.238855 | orchestrator | 2025-06-19 10:44:38.238866 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-19 10:44:38.238877 | orchestrator | Thursday 19 June 2025 10:44:37 +0000 (0:00:10.804) 0:01:10.155 ********* 2025-06-19 10:44:38.238888 | orchestrator | =============================================================================== 2025-06-19 10:44:38.238898 | orchestrator | placement : Running placement bootstrap container ---------------------- 11.92s 2025-06-19 10:44:38.238909 | orchestrator | placement : Restart placement-api container ---------------------------- 10.80s 2025-06-19 10:44:38.238920 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.56s 2025-06-19 10:44:38.238930 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.38s 2025-06-19 10:44:38.238941 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.83s 2025-06-19 10:44:38.238952 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.79s 2025-06-19 10:44:38.238962 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.71s 2025-06-19 10:44:38.238973 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.53s 2025-06-19 10:44:38.238983 | orchestrator | placement : Copying over placement.conf 
--------------------------------- 2.93s 2025-06-19 10:44:38.238994 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.48s 2025-06-19 10:44:38.239005 | orchestrator | placement : Creating placement databases -------------------------------- 1.99s 2025-06-19 10:44:38.239015 | orchestrator | placement : Copying over config.json files for services ----------------- 1.78s 2025-06-19 10:44:38.239026 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.72s 2025-06-19 10:44:38.239036 | orchestrator | placement : Check placement containers ---------------------------------- 1.43s 2025-06-19 10:44:38.239047 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.28s 2025-06-19 10:44:38.239062 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.20s 2025-06-19 10:44:38.239073 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.97s 2025-06-19 10:44:38.239083 | orchestrator | placement : include_tasks ----------------------------------------------- 0.84s 2025-06-19 10:44:38.239094 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.81s 2025-06-19 10:44:38.239110 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.80s 2025-06-19 10:44:38.239121 | orchestrator | 2025-06-19 10:44:38 | INFO  | Task 5a9378df-5449-4d0f-8b6d-5a37c0a8a559 is in state STARTED 2025-06-19 10:44:38.239132 | orchestrator | 2025-06-19 10:44:38 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:44:38.239143 | orchestrator | 2025-06-19 10:44:38 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:44:41.271665 | orchestrator | 2025-06-19 10:44:41 | INFO  | Task 98fb0674-e637-4364-af14-b475d39ce587 is in state STARTED 2025-06-19 10:44:41.273140 | orchestrator | 2025-06-19 10:44:41 | 
INFO  | Task 8d4e9e99-0749-4f59-a3d7-ae4a3b67fa7c is in state STARTED 2025-06-19 10:44:41.274829 | orchestrator | 2025-06-19 10:44:41 | INFO  | Task 5a9378df-5449-4d0f-8b6d-5a37c0a8a559 is in state STARTED 2025-06-19 10:44:41.276390 | orchestrator | 2025-06-19 10:44:41 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:44:41.276514 | orchestrator | 2025-06-19 10:44:41 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:44:44.321370 | orchestrator | 2025-06-19 10:44:44 | INFO  | Task 98fb0674-e637-4364-af14-b475d39ce587 is in state STARTED 2025-06-19 10:44:44.321964 | orchestrator | 2025-06-19 10:44:44 | INFO  | Task 8d4e9e99-0749-4f59-a3d7-ae4a3b67fa7c is in state STARTED 2025-06-19 10:44:44.323026 | orchestrator | 2025-06-19 10:44:44 | INFO  | Task 5a9378df-5449-4d0f-8b6d-5a37c0a8a559 is in state STARTED 2025-06-19 10:44:44.324005 | orchestrator | 2025-06-19 10:44:44 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:44:44.324029 | orchestrator | 2025-06-19 10:44:44 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:44:47.372121 | orchestrator | 2025-06-19 10:44:47 | INFO  | Task 98fb0674-e637-4364-af14-b475d39ce587 is in state STARTED 2025-06-19 10:44:47.373833 | orchestrator | 2025-06-19 10:44:47 | INFO  | Task 8d4e9e99-0749-4f59-a3d7-ae4a3b67fa7c is in state STARTED 2025-06-19 10:44:47.373867 | orchestrator | 2025-06-19 10:44:47 | INFO  | Task 5a9378df-5449-4d0f-8b6d-5a37c0a8a559 is in state STARTED 2025-06-19 10:44:47.373879 | orchestrator | 2025-06-19 10:44:47 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:44:47.373891 | orchestrator | 2025-06-19 10:44:47 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:44:50.396415 | orchestrator | 2025-06-19 10:44:50 | INFO  | Task 98fb0674-e637-4364-af14-b475d39ce587 is in state STARTED 2025-06-19 10:44:50.398942 | orchestrator | 2025-06-19 10:44:50 | INFO  | Task 
8d4e9e99-0749-4f59-a3d7-ae4a3b67fa7c is in state STARTED 2025-06-19 10:44:50.398968 | orchestrator | 2025-06-19 10:44:50 | INFO  | Task 5a9378df-5449-4d0f-8b6d-5a37c0a8a559 is in state STARTED 2025-06-19 10:44:50.399511 | orchestrator | 2025-06-19 10:44:50 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:44:50.399531 | orchestrator | 2025-06-19 10:44:50 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:44:53.451982 | orchestrator | 2025-06-19 10:44:53 | INFO  | Task 98fb0674-e637-4364-af14-b475d39ce587 is in state STARTED 2025-06-19 10:44:53.456088 | orchestrator | 2025-06-19 10:44:53 | INFO  | Task 8d4e9e99-0749-4f59-a3d7-ae4a3b67fa7c is in state SUCCESS 2025-06-19 10:44:53.458356 | orchestrator | 2025-06-19 10:44:53.458394 | orchestrator | 2025-06-19 10:44:53.458407 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-19 10:44:53.458419 | orchestrator | 2025-06-19 10:44:53.458430 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-19 10:44:53.458441 | orchestrator | Thursday 19 June 2025 10:42:00 +0000 (0:00:00.212) 0:00:00.212 ********* 2025-06-19 10:44:53.458477 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:44:53.458490 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:44:53.458501 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:44:53.458512 | orchestrator | 2025-06-19 10:44:53.458614 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-19 10:44:53.458745 | orchestrator | Thursday 19 June 2025 10:42:01 +0000 (0:00:00.302) 0:00:00.515 ********* 2025-06-19 10:44:53.458810 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-06-19 10:44:53.458871 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-06-19 10:44:53.458885 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-06-19 
10:44:53.459476 | orchestrator | 2025-06-19 10:44:53.459524 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-06-19 10:44:53.459541 | orchestrator | 2025-06-19 10:44:53.459578 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-19 10:44:53.459598 | orchestrator | Thursday 19 June 2025 10:42:01 +0000 (0:00:00.346) 0:00:00.861 ********* 2025-06-19 10:44:53.459617 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:44:53.459636 | orchestrator | 2025-06-19 10:44:53.459654 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-06-19 10:44:53.459666 | orchestrator | Thursday 19 June 2025 10:42:01 +0000 (0:00:00.423) 0:00:01.284 ********* 2025-06-19 10:44:53.459676 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-06-19 10:44:53.459687 | orchestrator | 2025-06-19 10:44:53.459698 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-06-19 10:44:53.459709 | orchestrator | Thursday 19 June 2025 10:42:05 +0000 (0:00:03.553) 0:00:04.838 ********* 2025-06-19 10:44:53.459720 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-06-19 10:44:53.459730 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-06-19 10:44:53.459741 | orchestrator | 2025-06-19 10:44:53.459752 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-06-19 10:44:53.459763 | orchestrator | Thursday 19 June 2025 10:42:11 +0000 (0:00:06.503) 0:00:11.341 ********* 2025-06-19 10:44:53.459774 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-19 10:44:53.459785 | orchestrator | 2025-06-19 10:44:53.459795 | orchestrator | 
TASK [service-ks-register : designate | Creating users] ************************ 2025-06-19 10:44:53.459806 | orchestrator | Thursday 19 June 2025 10:42:15 +0000 (0:00:03.710) 0:00:15.052 ********* 2025-06-19 10:44:53.459817 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-19 10:44:53.459827 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-06-19 10:44:53.460831 | orchestrator | 2025-06-19 10:44:53.460872 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-06-19 10:44:53.460884 | orchestrator | Thursday 19 June 2025 10:42:18 +0000 (0:00:03.384) 0:00:18.436 ********* 2025-06-19 10:44:53.460895 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-19 10:44:53.460907 | orchestrator | 2025-06-19 10:44:53.460918 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-06-19 10:44:53.460929 | orchestrator | Thursday 19 June 2025 10:42:22 +0000 (0:00:03.332) 0:00:21.768 ********* 2025-06-19 10:44:53.460940 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-06-19 10:44:53.460950 | orchestrator | 2025-06-19 10:44:53.460961 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-06-19 10:44:53.460972 | orchestrator | Thursday 19 June 2025 10:42:26 +0000 (0:00:04.334) 0:00:26.102 ********* 2025-06-19 10:44:53.460987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-19 10:44:53.461070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-19 10:44:53.461093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-19 10:44:53.461106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-19 10:44:53.461119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-19 10:44:53.461131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-19 10:44:53.461151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-19 10:44:53.461196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-19 10:44:53.461215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-19 10:44:53.461227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-19 10:44:53.461240 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-19 10:44:53.461251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-19 10:44:53.461344 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-19 10:44:53.461360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-19 10:44:53.461410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-19 10:44:53.461431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-19 10:44:53.461442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-19 10:44:53.461454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-19 10:44:53.461465 | orchestrator | 2025-06-19 10:44:53.461479 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-06-19 10:44:53.461491 | orchestrator | Thursday 19 June 2025 10:42:29 +0000 
(0:00:03.205) 0:00:29.308 ********* 2025-06-19 10:44:53.461503 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:44:53.461524 | orchestrator | 2025-06-19 10:44:53.461536 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-06-19 10:44:53.461548 | orchestrator | Thursday 19 June 2025 10:42:29 +0000 (0:00:00.121) 0:00:29.430 ********* 2025-06-19 10:44:53.461559 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:44:53.461571 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:44:53.461583 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:44:53.461595 | orchestrator | 2025-06-19 10:44:53.461607 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-19 10:44:53.461619 | orchestrator | Thursday 19 June 2025 10:42:30 +0000 (0:00:00.243) 0:00:29.673 ********* 2025-06-19 10:44:53.461632 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:44:53.461644 | orchestrator | 2025-06-19 10:44:53.461657 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-06-19 10:44:53.461669 | orchestrator | Thursday 19 June 2025 10:42:30 +0000 (0:00:00.573) 0:00:30.246 ********* 2025-06-19 10:44:53.461682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-19 10:44:53.461729 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-19 10:44:53.461750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}}) 2025-06-19 10:44:53.461764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-19 10:44:53.461784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-19 10:44:53.461797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-19 10:44:53.461810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-19 10:44:53.461852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-19 10:44:53.461870 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-19 
10:44:53.461882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-19 10:44:53.461900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-19 10:44:53.461911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-19 10:44:53.461923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-19 10:44:53.461965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-19 10:44:53.461979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-19 10:44:53.461995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-19 10:44:53.462006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-19 10:44:53.462057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-19 10:44:53.462071 | orchestrator |
2025-06-19 10:44:53.462082 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] ***
2025-06-19 10:44:53.462093 | orchestrator | Thursday 19 June 2025 10:42:37 +0000 (0:00:06.318) 0:00:36.565 *********
2025-06-19 10:44:53.462105 | orchestrator | skipping: [testbed-node-0] => (item={'key':
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-19 10:44:53.462116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-19 10:44:53.462163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-19 10:44:53.462182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-19 10:44:53.462194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-19 10:44:53.462212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': 
'30'}}})  2025-06-19 10:44:53.462224 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:44:53.462235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-19 10:44:53.462247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-19 10:44:53.462319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-19 10:44:53.462343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-19 10:44:53.462355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-19 10:44:53.462374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-19 10:44:53.462385 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:44:53.462396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-19 10:44:53.462407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-19 10:44:53.462450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-19 10:44:53.462463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-19 10:44:53.462486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-19 10:44:53.462497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-19 10:44:53.462508 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:44:53.462519 | orchestrator | 2025-06-19 10:44:53.462530 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-06-19 10:44:53.462541 | orchestrator | Thursday 19 June 2025 10:42:37 +0000 (0:00:00.691) 0:00:37.256 ********* 2025-06-19 10:44:53.462552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-19 10:44:53.462563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-19 10:44:53.462603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-19 10:44:53.462616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-19 10:44:53.462639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-19 10:44:53.462650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-19 10:44:53.462662 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:44:53.462673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-19 10:44:53.462685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 
'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-19 10:44:53.462725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-19 10:44:53.462738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-19 10:44:53.462761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-19 10:44:53.462772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-19 10:44:53.462784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-19 10:44:53.462795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-19 10:44:53.462807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-19 10:44:53.462847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-19 10:44:53.462873 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-19 10:44:53.462885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-19 10:44:53.462896 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:44:53.462907 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:44:53.462918 | orchestrator |
2025-06-19 10:44:53.463018 | orchestrator | TASK [designate : Copying over config.json files for services] *****************
2025-06-19 10:44:53.463036 | orchestrator | Thursday 19 June 2025 10:42:39 +0000 (0:00:01.231) 0:00:38.488 *********
2025-06-19 10:44:53.463047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-19 10:44:53.463059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-19 10:44:53.463103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-19 10:44:53.463130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-19 10:44:53.463142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-19 10:44:53.463153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 
'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-19 10:44:53.463164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-19 10:44:53.463176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-19 10:44:53.463216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-19 10:44:53.463236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-19 10:44:53.463252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-19 10:44:53.463263 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-19 10:44:53.463302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-19 10:44:53.463314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-19 10:44:53.463325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}}) 2025-06-19 10:44:53.463369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-19 10:44:53.463391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-19 10:44:53.463412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-19 10:44:53.463424 | orchestrator | 2025-06-19 10:44:53.463435 | orchestrator | TASK [designate : 
Copying over designate.conf] ********************************* 2025-06-19 10:44:53.463446 | orchestrator | Thursday 19 June 2025 10:42:45 +0000 (0:00:06.395) 0:00:44.883 ********* 2025-06-19 10:44:53.463457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-19 10:44:53.463469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-19 10:44:53.463480 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-19 10:44:53.463504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-19 10:44:53.463520 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-19 10:44:53.463532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-19 10:44:53.463543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-19 10:44:53.463555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-19 10:44:53.463566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-19 10:44:53.463592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-19 10:44:53.463604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-19 10:44:53.463620 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-19 10:44:53.463631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-19 10:44:53.463643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-19 10:44:53.463654 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-19 10:44:53.463671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-19 10:44:53.463690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-19 10:44:53.463701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-19 10:44:53.463713 | orchestrator | 2025-06-19 10:44:53.463724 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-06-19 10:44:53.463735 | orchestrator | Thursday 19 June 2025 10:43:00 +0000 (0:00:14.879) 0:00:59.762 ********* 2025-06-19 10:44:53.463746 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-06-19 10:44:53.463757 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-06-19 10:44:53.463768 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-06-19 10:44:53.463778 | orchestrator | 2025-06-19 10:44:53.463789 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-06-19 10:44:53.463800 | orchestrator | Thursday 19 June 2025 10:43:03 +0000 (0:00:03.386) 0:01:03.149 ********* 2025-06-19 10:44:53.463811 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-06-19 10:44:53.463822 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-06-19 10:44:53.463832 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-06-19 10:44:53.463843 | orchestrator | 2025-06-19 10:44:53.463854 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-06-19 10:44:53.463864 | orchestrator | Thursday 19 
June 2025 10:43:06 +0000 (0:00:02.601) 0:01:05.751 ********* 2025-06-19 10:44:53.463903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-19 10:44:53.463922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-19 10:44:53.463941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-19 10:44:53.463958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-19 10:44:53.463970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-19 10:44:53.463981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-19 10:44:53.463992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-19 10:44:53.464011 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}}) 2025-06-19 10:44:53.464022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-19 10:44:53.464039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-19 10:44:53.464055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-19 10:44:53.464067 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-19 10:44:53.464078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-19 10:44:53.464096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-19 10:44:53.464107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-19 10:44:53.464118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-19 10:44:53.464136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-19 10:44:53.464153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-19 10:44:53.464164 | orchestrator |
2025-06-19 10:44:53.464175 | orchestrator | TASK [designate : Copying over rndc.key] ***************************************
2025-06-19 10:44:53.464186 | orchestrator | Thursday 19 June 2025 10:43:09 +0000 (0:00:02.801) 0:01:08.552 *********
2025-06-19 10:44:53.464198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-19 10:44:53.464215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-19 10:44:53.464227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-19 10:44:53.464244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-19 10:44:53.464260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-19 10:44:53.464431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-19 10:44:53.464479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-19 10:44:53.464491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-19 10:44:53.464502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-19 10:44:53.464514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-19 10:44:53.464538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-19 10:44:53.464557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-19 10:44:53.464568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-19 10:44:53.464589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-19 10:44:53.464601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-19 10:44:53.464612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-19 10:44:53.464629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-19 10:44:53.464641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-19 10:44:53.464651 | orchestrator |
2025-06-19 10:44:53.464661 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-06-19 10:44:53.464675 | orchestrator | Thursday 19 June 2025 10:43:11 +0000 (0:00:02.579) 0:01:11.132 *********
2025-06-19 10:44:53.464685 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:44:53.464696 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:44:53.464705 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:44:53.464715 | orchestrator |
2025-06-19 10:44:53.464725 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2025-06-19 10:44:53.464734 | orchestrator | Thursday 19 June 2025 10:43:12 +0000 (0:00:00.545) 0:01:11.678 *********
2025-06-19 10:44:53.464751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-19 10:44:53.464761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-19 10:44:53.464771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-19 10:44:53.464781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-19 10:44:53.464798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-19 10:44:53.464812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-19 10:44:53.464828 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:44:53.464838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-19 10:44:53.464848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-19 10:44:53.464858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-19 10:44:53.464868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-19 10:44:53.464884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-19 10:44:53.464894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-19 10:44:53.464909 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:44:53.464923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-19 10:44:53.464934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-19 10:44:53.464944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-19 10:44:53.464954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-19 10:44:53.464964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-19 10:44:53.464979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-19 10:44:53.464996 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:44:53.465006 | orchestrator |
2025-06-19 10:44:53.465015 | orchestrator | TASK [designate : Check designate containers] **********************************
2025-06-19 10:44:53.465025 | orchestrator | Thursday 19 June 2025 10:43:13 +0000 (0:00:01.036) 0:01:12.714 *********
2025-06-19 10:44:53.465039 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-19 10:44:53.465049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-19 10:44:53.465060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-19 10:44:53.465070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-19 10:44:53.465085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-19 10:44:53.465106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-19 10:44:53.465116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-19 10:44:53.465126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-19 10:44:53.465157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-19 10:44:53.465192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-19 10:44:53.465224
| orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-19 10:44:53.465250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-19 10:44:53.465296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-19 10:44:53.465312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-19 10:44:53.465328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-19 10:44:53.465342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-19 10:44:53.465358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-19 10:44:53.465382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-19 10:44:53.465412 | orchestrator | 2025-06-19 10:44:53.465427 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-19 10:44:53.465443 | orchestrator | Thursday 19 June 2025 10:43:17 +0000 (0:00:04.687) 0:01:17.401 ********* 2025-06-19 10:44:53.465458 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:44:53.465473 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:44:53.465488 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:44:53.465503 | orchestrator | 2025-06-19 10:44:53.465518 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-06-19 10:44:53.465533 | orchestrator | Thursday 19 June 2025 10:43:18 +0000 (0:00:00.582) 0:01:17.984 ********* 2025-06-19 10:44:53.465548 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-06-19 10:44:53.465563 | orchestrator | 2025-06-19 10:44:53.465579 | orchestrator | TASK [designate : Creating Designate databases user and setting 
permissions] *** 2025-06-19 10:44:53.465595 | orchestrator | Thursday 19 June 2025 10:43:20 +0000 (0:00:02.367) 0:01:20.351 ********* 2025-06-19 10:44:53.465618 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-19 10:44:53.465634 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-06-19 10:44:53.465650 | orchestrator | 2025-06-19 10:44:53.465667 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-06-19 10:44:53.465682 | orchestrator | Thursday 19 June 2025 10:43:23 +0000 (0:00:02.237) 0:01:22.589 ********* 2025-06-19 10:44:53.465698 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:44:53.465716 | orchestrator | 2025-06-19 10:44:53.465732 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-06-19 10:44:53.465749 | orchestrator | Thursday 19 June 2025 10:43:39 +0000 (0:00:16.225) 0:01:38.814 ********* 2025-06-19 10:44:53.465762 | orchestrator | 2025-06-19 10:44:53.465772 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-06-19 10:44:53.465789 | orchestrator | Thursday 19 June 2025 10:43:39 +0000 (0:00:00.161) 0:01:38.975 ********* 2025-06-19 10:44:53.465804 | orchestrator | 2025-06-19 10:44:53.465820 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-06-19 10:44:53.465837 | orchestrator | Thursday 19 June 2025 10:43:39 +0000 (0:00:00.155) 0:01:39.131 ********* 2025-06-19 10:44:53.465853 | orchestrator | 2025-06-19 10:44:53.465871 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-06-19 10:44:53.465886 | orchestrator | Thursday 19 June 2025 10:43:39 +0000 (0:00:00.156) 0:01:39.287 ********* 2025-06-19 10:44:53.465902 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:44:53.465912 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:44:53.465921 | 
orchestrator | changed: [testbed-node-2] 2025-06-19 10:44:53.465931 | orchestrator | 2025-06-19 10:44:53.465940 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-06-19 10:44:53.465949 | orchestrator | Thursday 19 June 2025 10:43:48 +0000 (0:00:08.655) 0:01:47.943 ********* 2025-06-19 10:44:53.465959 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:44:53.465968 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:44:53.465978 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:44:53.465987 | orchestrator | 2025-06-19 10:44:53.465997 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-06-19 10:44:53.466006 | orchestrator | Thursday 19 June 2025 10:44:00 +0000 (0:00:12.284) 0:02:00.228 ********* 2025-06-19 10:44:53.466065 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:44:53.466078 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:44:53.466088 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:44:53.466097 | orchestrator | 2025-06-19 10:44:53.466106 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-06-19 10:44:53.466116 | orchestrator | Thursday 19 June 2025 10:44:12 +0000 (0:00:11.732) 0:02:11.960 ********* 2025-06-19 10:44:53.466136 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:44:53.466145 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:44:53.466155 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:44:53.466164 | orchestrator | 2025-06-19 10:44:53.466174 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-06-19 10:44:53.466183 | orchestrator | Thursday 19 June 2025 10:44:24 +0000 (0:00:12.075) 0:02:24.036 ********* 2025-06-19 10:44:53.466193 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:44:53.466203 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:44:53.466212 | orchestrator | 
changed: [testbed-node-2] 2025-06-19 10:44:53.466222 | orchestrator | 2025-06-19 10:44:53.466231 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-06-19 10:44:53.466241 | orchestrator | Thursday 19 June 2025 10:44:37 +0000 (0:00:12.928) 0:02:36.965 ********* 2025-06-19 10:44:53.466250 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:44:53.466260 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:44:53.466289 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:44:53.466299 | orchestrator | 2025-06-19 10:44:53.466308 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-06-19 10:44:53.466318 | orchestrator | Thursday 19 June 2025 10:44:44 +0000 (0:00:07.236) 0:02:44.201 ********* 2025-06-19 10:44:53.466328 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:44:53.466337 | orchestrator | 2025-06-19 10:44:53.466347 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-19 10:44:53.466357 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-19 10:44:53.466368 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-19 10:44:53.466378 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-19 10:44:53.466388 | orchestrator | 2025-06-19 10:44:53.466397 | orchestrator | 2025-06-19 10:44:53.466416 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-19 10:44:53.466426 | orchestrator | Thursday 19 June 2025 10:44:51 +0000 (0:00:06.766) 0:02:50.968 ********* 2025-06-19 10:44:53.466436 | orchestrator | =============================================================================== 2025-06-19 10:44:53.466445 | orchestrator | designate : Running Designate bootstrap container 
---------------------- 16.22s 2025-06-19 10:44:53.466455 | orchestrator | designate : Copying over designate.conf -------------------------------- 14.88s 2025-06-19 10:44:53.466465 | orchestrator | designate : Restart designate-mdns container --------------------------- 12.93s 2025-06-19 10:44:53.466474 | orchestrator | designate : Restart designate-api container ---------------------------- 12.28s 2025-06-19 10:44:53.466483 | orchestrator | designate : Restart designate-producer container ----------------------- 12.08s 2025-06-19 10:44:53.466493 | orchestrator | designate : Restart designate-central container ------------------------ 11.73s 2025-06-19 10:44:53.466502 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 8.66s 2025-06-19 10:44:53.466512 | orchestrator | designate : Restart designate-worker container -------------------------- 7.24s 2025-06-19 10:44:53.466527 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 6.77s 2025-06-19 10:44:53.466536 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.50s 2025-06-19 10:44:53.466546 | orchestrator | designate : Copying over config.json files for services ----------------- 6.40s 2025-06-19 10:44:53.466556 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.32s 2025-06-19 10:44:53.466565 | orchestrator | designate : Check designate containers ---------------------------------- 4.69s 2025-06-19 10:44:53.466574 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.33s 2025-06-19 10:44:53.466584 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.71s 2025-06-19 10:44:53.466600 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.55s 2025-06-19 10:44:53.466609 | orchestrator | designate : Copying over pools.yaml 
------------------------------------- 3.39s 2025-06-19 10:44:53.466619 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.38s 2025-06-19 10:44:53.466629 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.33s 2025-06-19 10:44:53.466638 | orchestrator | designate : Ensuring config directories exist --------------------------- 3.21s 2025-06-19 10:44:53.466766 | orchestrator | 2025-06-19 10:44:53 | INFO  | Task 5a9378df-5449-4d0f-8b6d-5a37c0a8a559 is in state STARTED 2025-06-19 10:44:53.466782 | orchestrator | 2025-06-19 10:44:53 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:44:53.466796 | orchestrator | 2025-06-19 10:44:53 | INFO  | Task 1962752a-9b82-4bc7-82ed-80690898a35d is in state STARTED 2025-06-19 10:44:53.466806 | orchestrator | 2025-06-19 10:44:53 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:44:56.523113 | orchestrator | 2025-06-19 10:44:56 | INFO  | Task 98fb0674-e637-4364-af14-b475d39ce587 is in state STARTED 2025-06-19 10:44:56.524431 | orchestrator | 2025-06-19 10:44:56 | INFO  | Task 5a9378df-5449-4d0f-8b6d-5a37c0a8a559 is in state STARTED 2025-06-19 10:44:56.526491 | orchestrator | 2025-06-19 10:44:56 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:44:56.527326 | orchestrator | 2025-06-19 10:44:56 | INFO  | Task 1962752a-9b82-4bc7-82ed-80690898a35d is in state STARTED 2025-06-19 10:44:56.527524 | orchestrator | 2025-06-19 10:44:56 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:44:59.565000 | orchestrator | 2025-06-19 10:44:59 | INFO  | Task 98fb0674-e637-4364-af14-b475d39ce587 is in state STARTED 2025-06-19 10:44:59.565107 | orchestrator | 2025-06-19 10:44:59 | INFO  | Task 5a9378df-5449-4d0f-8b6d-5a37c0a8a559 is in state STARTED 2025-06-19 10:44:59.568588 | orchestrator | 2025-06-19 10:44:59 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 
2025-06-19 10:44:59.569293 | orchestrator | 2025-06-19 10:44:59 | INFO  | Task 1962752a-9b82-4bc7-82ed-80690898a35d is in state SUCCESS 2025-06-19 10:44:59.569318 | orchestrator | 2025-06-19 10:44:59 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:45:02.612740 | orchestrator | 2025-06-19 10:45:02 | INFO  | Task 98fb0674-e637-4364-af14-b475d39ce587 is in state STARTED 2025-06-19 10:45:02.612841 | orchestrator | 2025-06-19 10:45:02 | INFO  | Task 5a9378df-5449-4d0f-8b6d-5a37c0a8a559 is in state STARTED 2025-06-19 10:45:02.614335 | orchestrator | 2025-06-19 10:45:02 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:45:02.615975 | orchestrator | 2025-06-19 10:45:02 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED 2025-06-19 10:45:02.616008 | orchestrator | 2025-06-19 10:45:02 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:45:05.659203 | orchestrator | 2025-06-19 10:45:05 | INFO  | Task 98fb0674-e637-4364-af14-b475d39ce587 is in state STARTED 2025-06-19 10:45:05.661773 | orchestrator | 2025-06-19 10:45:05 | INFO  | Task 5a9378df-5449-4d0f-8b6d-5a37c0a8a559 is in state STARTED 2025-06-19 10:45:05.663029 | orchestrator | 2025-06-19 10:45:05 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:45:05.664869 | orchestrator | 2025-06-19 10:45:05 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED 2025-06-19 10:45:05.665282 | orchestrator | 2025-06-19 10:45:05 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:45:08.705140 | orchestrator | 2025-06-19 10:45:08 | INFO  | Task 98fb0674-e637-4364-af14-b475d39ce587 is in state STARTED 2025-06-19 10:45:08.707734 | orchestrator | 2025-06-19 10:45:08 | INFO  | Task 5a9378df-5449-4d0f-8b6d-5a37c0a8a559 is in state STARTED 2025-06-19 10:45:08.709511 | orchestrator | 2025-06-19 10:45:08 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:45:08.711230 | 
orchestrator | 2025-06-19 10:45:08 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED 2025-06-19 10:45:08.711313 | orchestrator | 2025-06-19 10:45:08 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:46:15.841446 | orchestrator | 2025-06-19 10:46:15 | INFO  | Task 98fb0674-e637-4364-af14-b475d39ce587 is in state STARTED 2025-06-19 10:46:15.844090 | orchestrator | 2025-06-19 10:46:15 | INFO  | Task 5a9378df-5449-4d0f-8b6d-5a37c0a8a559 is in state SUCCESS 2025-06-19 10:46:15.846628 | orchestrator | 2025-06-19 10:46:15.846671 | orchestrator | 2025-06-19 10:46:15.846683 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-19 10:46:15.846696 | orchestrator | 2025-06-19 10:46:15.846707 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-19 10:46:15.846744 | orchestrator | Thursday 19 June 2025 10:44:56 +0000 (0:00:00.262) 0:00:00.262 ********* 2025-06-19 10:46:15.846756 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:46:15.846768 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:46:15.846779 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:46:15.846789 | orchestrator | 2025-06-19 10:46:15.846800 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-19 10:46:15.846811 | orchestrator | Thursday 19 June 2025 10:44:57 +0000 (0:00:00.381) 0:00:00.644 ********* 2025-06-19 10:46:15.846821 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-06-19 10:46:15.846832 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-06-19 10:46:15.846843 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-06-19 10:46:15.846853 | orchestrator | 2025-06-19 10:46:15.846864 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-06-19 10:46:15.846875 | orchestrator | 2025-06-19 10:46:15.846886 | orchestrator | TASK [Waiting for Nova public port to be UP]
*********************************** 2025-06-19 10:46:15.846896 | orchestrator | Thursday 19 June 2025 10:44:58 +0000 (0:00:00.961) 0:00:01.606 ********* 2025-06-19 10:46:15.846907 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:46:15.846917 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:46:15.846928 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:46:15.846945 | orchestrator | 2025-06-19 10:46:15.846963 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-19 10:46:15.846983 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-19 10:46:15.847006 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-19 10:46:15.847026 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-19 10:46:15.847046 | orchestrator | 2025-06-19 10:46:15.847066 | orchestrator | 2025-06-19 10:46:15.847084 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-19 10:46:15.847103 | orchestrator | Thursday 19 June 2025 10:44:58 +0000 (0:00:00.756) 0:00:02.362 ********* 2025-06-19 10:46:15.847123 | orchestrator | =============================================================================== 2025-06-19 10:46:15.847144 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.96s 2025-06-19 10:46:15.847165 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.76s 2025-06-19 10:46:15.847187 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.38s 2025-06-19 10:46:15.847239 | orchestrator | 2025-06-19 10:46:15.847252 | orchestrator | 2025-06-19 10:46:15.847265 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-19 10:46:15.847277 | orchestrator | 
2025-06-19 10:46:15.847289 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-19 10:46:15.847301 | orchestrator | Thursday 19 June 2025 10:44:19 +0000 (0:00:00.197) 0:00:00.197 ********* 2025-06-19 10:46:15.847313 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:46:15.847324 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:46:15.847336 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:46:15.847348 | orchestrator | 2025-06-19 10:46:15.847360 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-19 10:46:15.847372 | orchestrator | Thursday 19 June 2025 10:44:19 +0000 (0:00:00.222) 0:00:00.420 ********* 2025-06-19 10:46:15.847384 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-06-19 10:46:15.847397 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-06-19 10:46:15.847409 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-06-19 10:46:15.847421 | orchestrator | 2025-06-19 10:46:15.847432 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-06-19 10:46:15.847455 | orchestrator | 2025-06-19 10:46:15.847468 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-06-19 10:46:15.847480 | orchestrator | Thursday 19 June 2025 10:44:20 +0000 (0:00:00.319) 0:00:00.740 ********* 2025-06-19 10:46:15.847493 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:46:15.847505 | orchestrator | 2025-06-19 10:46:15.847517 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-06-19 10:46:15.847529 | orchestrator | Thursday 19 June 2025 10:44:20 +0000 (0:00:00.479) 0:00:01.220 ********* 2025-06-19 10:46:15.847542 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 
2025-06-19 10:46:15.847552 | orchestrator | 2025-06-19 10:46:15.847563 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-06-19 10:46:15.847574 | orchestrator | Thursday 19 June 2025 10:44:23 +0000 (0:00:03.067) 0:00:04.287 ********* 2025-06-19 10:46:15.847590 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-06-19 10:46:15.847609 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-06-19 10:46:15.847627 | orchestrator | 2025-06-19 10:46:15.847644 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-06-19 10:46:15.847661 | orchestrator | Thursday 19 June 2025 10:44:29 +0000 (0:00:05.784) 0:00:10.071 ********* 2025-06-19 10:46:15.847690 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-19 10:46:15.847710 | orchestrator | 2025-06-19 10:46:15.847731 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-06-19 10:46:15.847749 | orchestrator | Thursday 19 June 2025 10:44:32 +0000 (0:00:03.010) 0:00:13.082 ********* 2025-06-19 10:46:15.847782 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-19 10:46:15.847793 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-06-19 10:46:15.847805 | orchestrator | 2025-06-19 10:46:15.847815 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-06-19 10:46:15.847826 | orchestrator | Thursday 19 June 2025 10:44:36 +0000 (0:00:03.609) 0:00:16.691 ********* 2025-06-19 10:46:15.847837 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-19 10:46:15.847848 | orchestrator | 2025-06-19 10:46:15.847858 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-06-19 10:46:15.847869 | orchestrator 
| Thursday 19 June 2025 10:44:39 +0000 (0:00:03.585) 0:00:20.277 ********* 2025-06-19 10:46:15.847880 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-06-19 10:46:15.847891 | orchestrator | 2025-06-19 10:46:15.847901 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-06-19 10:46:15.847912 | orchestrator | Thursday 19 June 2025 10:44:44 +0000 (0:00:04.500) 0:00:24.778 ********* 2025-06-19 10:46:15.847923 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:46:15.847934 | orchestrator | 2025-06-19 10:46:15.847944 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-06-19 10:46:15.847955 | orchestrator | Thursday 19 June 2025 10:44:47 +0000 (0:00:03.135) 0:00:27.913 ********* 2025-06-19 10:46:15.847966 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:46:15.847976 | orchestrator | 2025-06-19 10:46:15.847987 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-06-19 10:46:15.847998 | orchestrator | Thursday 19 June 2025 10:44:51 +0000 (0:00:04.271) 0:00:32.185 ********* 2025-06-19 10:46:15.848009 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:46:15.848019 | orchestrator | 2025-06-19 10:46:15.848030 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-06-19 10:46:15.848041 | orchestrator | Thursday 19 June 2025 10:44:55 +0000 (0:00:03.804) 0:00:35.989 ********* 2025-06-19 10:46:15.848055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-19 10:46:15.848236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-19 10:46:15.848322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-19 10:46:15.848353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-19 10:46:15.848366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-19 10:46:15.848394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-19 10:46:15.848405 | orchestrator | 2025-06-19 10:46:15.848417 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-06-19 10:46:15.848428 | orchestrator | Thursday 19 June 2025 10:44:57 +0000 (0:00:01.642) 0:00:37.631 ********* 2025-06-19 10:46:15.848439 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:46:15.848450 | orchestrator | 2025-06-19 10:46:15.848460 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-06-19 10:46:15.848471 | orchestrator | Thursday 19 June 2025 10:44:57 +0000 (0:00:00.202) 0:00:37.834 ********* 2025-06-19 10:46:15.848482 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:46:15.848493 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:46:15.848503 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:46:15.848514 | orchestrator | 2025-06-19 10:46:15.848525 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-06-19 10:46:15.848536 | orchestrator | Thursday 19 June 2025 10:44:57 +0000 (0:00:00.574) 0:00:38.408 ********* 2025-06-19 10:46:15.848546 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-19 10:46:15.848557 | orchestrator | 2025-06-19 10:46:15.848568 | orchestrator | TASK [magnum : Copying over 
kubeconfig file] *********************************** 2025-06-19 10:46:15.848578 | orchestrator | Thursday 19 June 2025 10:44:59 +0000 (0:00:01.295) 0:00:39.704 ********* 2025-06-19 10:46:15.848590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-19 10:46:15.848616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9511', 'listen_port': '9511'}}}}) 2025-06-19 10:46:15.848628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-19 10:46:15.848647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-19 10:46:15.848659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-19 10:46:15.848670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-19 10:46:15.848681 | orchestrator | 2025-06-19 10:46:15.848693 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-06-19 10:46:15.848708 | orchestrator | Thursday 19 June 2025 10:45:01 +0000 (0:00:02.621) 0:00:42.325 ********* 2025-06-19 10:46:15.848719 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:46:15.848731 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:46:15.848742 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:46:15.848752 | orchestrator | 2025-06-19 10:46:15.848763 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-06-19 10:46:15.848781 | 
orchestrator | Thursday 19 June 2025 10:45:02 +0000 (0:00:00.318) 0:00:42.644 ********* 2025-06-19 10:46:15.848792 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:46:15.848803 | orchestrator | 2025-06-19 10:46:15.848814 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-06-19 10:46:15.848825 | orchestrator | Thursday 19 June 2025 10:45:02 +0000 (0:00:00.749) 0:00:43.394 ********* 2025-06-19 10:46:15.848843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-19 10:46:15.848855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-19 10:46:15.848866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-19 10:46:15.848878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-19 10:46:15.848901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-19 10:46:15.848919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-19 10:46:15.848931 | orchestrator | 2025-06-19 10:46:15.848942 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-06-19 10:46:15.848952 | orchestrator | Thursday 19 June 2025 10:45:05 +0000 (0:00:02.416) 0:00:45.810 ********* 2025-06-19 10:46:15.848966 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-19 10:46:15.848979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-19 10:46:15.848991 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:46:15.849009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-19 10:46:15.849032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-19 10:46:15.849058 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:46:15.849072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-19 10:46:15.849085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-19 10:46:15.849098 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:46:15.849110 | orchestrator | 2025-06-19 10:46:15.849122 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-06-19 10:46:15.849134 | orchestrator | Thursday 19 June 2025 10:45:05 +0000 (0:00:00.618) 0:00:46.429 ********* 2025-06-19 10:46:15.849147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-19 10:46:15.849165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-19 10:46:15.849184 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:46:15.849256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-19 10:46:15.849271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-19 10:46:15.849284 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:46:15.849297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-19 10:46:15.849311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-19 10:46:15.849323 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:46:15.849334 | orchestrator | 2025-06-19 10:46:15.849345 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-06-19 10:46:15.849363 | orchestrator | Thursday 19 June 2025 10:45:07 +0000 (0:00:01.284) 0:00:47.713 ********* 2025-06-19 10:46:15.849387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': 
'9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-19 10:46:15.849399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-19 10:46:15.849412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9511', 'listen_port': '9511'}}}}) 2025-06-19 10:46:15.849423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-19 10:46:15.849435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-19 10:46:15.849465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-19 10:46:15.849477 | orchestrator | 2025-06-19 10:46:15.849488 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-06-19 10:46:15.849499 | orchestrator | Thursday 19 June 2025 10:45:09 +0000 (0:00:02.277) 0:00:49.991 ********* 2025-06-19 10:46:15.849510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-19 10:46:15.849522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-19 10:46:15.849534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-19 10:46:15.849551 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-19 10:46:15.849575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-19 10:46:15.849587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-19 10:46:15.849598 | orchestrator | 2025-06-19 10:46:15.849609 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-06-19 10:46:15.849620 | orchestrator | Thursday 19 June 
2025 10:45:14 +0000 (0:00:05.026) 0:00:55.018 ********* 2025-06-19 10:46:15.849631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-19 10:46:15.849643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-19 10:46:15.849661 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:46:15.849677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-19 10:46:15.849696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-19 10:46:15.849707 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:46:15.849718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-19 10:46:15.849730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-19 10:46:15.849741 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:46:15.849752 | orchestrator | 2025-06-19 10:46:15.849763 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-06-19 10:46:15.849774 | orchestrator | Thursday 19 June 2025 10:45:15 +0000 (0:00:00.661) 0:00:55.680 ********* 2025-06-19 10:46:15.849785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-19 10:46:15.849813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-19 10:46:15.849825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-19 10:46:15.849837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-19 10:46:15.849848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 
5672'], 'timeout': '30'}}})
2025-06-19 10:46:15.849866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-19 10:46:15.849877 | orchestrator |
2025-06-19 10:46:15.849888 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-06-19 10:46:15.849899 | orchestrator | Thursday 19 June 2025 10:45:17 +0000 (0:00:02.128) 0:00:57.809 *********
2025-06-19 10:46:15.849910 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:46:15.849921 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:46:15.849932 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:46:15.849943 | orchestrator |
2025-06-19 10:46:15.849953 | orchestrator | TASK [magnum : Creating Magnum database] ***************************************
2025-06-19 10:46:15.849964 | orchestrator | Thursday 19 June 2025 10:45:17 +0000 (0:00:00.314) 0:00:58.123 *********
2025-06-19 10:46:15.849975 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:46:15.849986 | orchestrator |
2025-06-19 10:46:15.849997 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] **********
2025-06-19 10:46:15.850011 | orchestrator | Thursday 19 June 2025 10:45:19 +0000 (0:00:02.189) 0:01:00.313 *********
2025-06-19 10:46:15.850057 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:46:15.850069 | orchestrator |
2025-06-19 10:46:15.850079 | orchestrator | TASK [magnum : Running Magnum bootstrap container] *****************************
2025-06-19 10:46:15.850090 | orchestrator | Thursday 19 June 2025 10:45:22 +0000 (0:00:02.299) 0:01:02.612 *********
2025-06-19 10:46:15.850107 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:46:15.850119 | orchestrator |
2025-06-19 10:46:15.850130 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-06-19 10:46:15.850140 | orchestrator | Thursday 19 June 2025 10:45:47 +0000 (0:00:25.541) 0:01:28.153 *********
2025-06-19 10:46:15.850151 | orchestrator |
2025-06-19 10:46:15.850162 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-06-19 10:46:15.850173 | orchestrator | Thursday 19 June 2025 10:45:47 +0000 (0:00:00.068) 0:01:28.222 *********
2025-06-19 10:46:15.850183 | orchestrator |
2025-06-19 10:46:15.850212 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-06-19 10:46:15.850224 | orchestrator | Thursday 19 June 2025 10:45:47 +0000 (0:00:00.070) 0:01:28.292 *********
2025-06-19 10:46:15.850235 | orchestrator |
2025-06-19 10:46:15.850245 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************
2025-06-19 10:46:15.850256 | orchestrator | Thursday 19 June 2025 10:45:47 +0000 (0:00:00.071) 0:01:28.364 *********
2025-06-19 10:46:15.850267 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:46:15.850277 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:46:15.850288 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:46:15.850299 | orchestrator |
2025-06-19 10:46:15.850310 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ******************
2025-06-19 10:46:15.850320 | orchestrator | Thursday 19 June 2025 10:46:02 +0000 (0:00:14.701) 0:01:43.065 *********
2025-06-19 10:46:15.850331 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:46:15.850342 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:46:15.850353 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:46:15.850363 | orchestrator |
2025-06-19 10:46:15.850374 | orchestrator | PLAY RECAP *********************************************************************
2025-06-19 10:46:15.850386 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-19 10:46:15.850408 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-19 10:46:15.850419 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-19 10:46:15.850430 | orchestrator |
2025-06-19 10:46:15.850440 | orchestrator |
2025-06-19 10:46:15.850451 | orchestrator | TASKS RECAP ********************************************************************
2025-06-19 10:46:15.850462 | orchestrator | Thursday 19 June 2025 10:46:12 +0000 (0:00:10.391) 0:01:53.456 *********
2025-06-19 10:46:15.850473 | orchestrator | ===============================================================================
2025-06-19 10:46:15.850483 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 25.54s
2025-06-19 10:46:15.850494 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 14.70s
2025-06-19 10:46:15.850505 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 10.39s
2025-06-19 10:46:15.850515 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 5.78s
2025-06-19 10:46:15.850526 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.03s
2025-06-19 10:46:15.850537 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.50s
2025-06-19 10:46:15.850547 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.27s
2025-06-19 10:46:15.850558 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.80s
2025-06-19 10:46:15.850569 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.61s
2025-06-19 10:46:15.850580 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.59s
2025-06-19 10:46:15.850590 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.14s
2025-06-19 10:46:15.850601 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.07s
2025-06-19 10:46:15.850611 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.01s
2025-06-19 10:46:15.850622 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.62s
2025-06-19 10:46:15.850633 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.42s
2025-06-19 10:46:15.850643 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.30s
2025-06-19 10:46:15.850654 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.28s
2025-06-19 10:46:15.850665 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.19s
2025-06-19 10:46:15.850675 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.13s
2025-06-19 10:46:15.850686 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.64s
2025-06-19 10:46:15.850697 | orchestrator | 2025-06-19 10:46:15 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED
2025-06-19 10:46:15.850708 | orchestrator | 2025-06-19 10:46:15 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:46:15.850718 | orchestrator | 2025-06-19 10:46:15 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:46:18.890371 | orchestrator | 2025-06-19 10:46:18 | INFO  | Task 98fb0674-e637-4364-af14-b475d39ce587 is in state STARTED
2025-06-19 10:46:18.890466 | orchestrator | 2025-06-19 10:46:18 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED
2025-06-19 10:46:18.891184 | orchestrator | 2025-06-19 10:46:18 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:46:18.891233 | orchestrator | 2025-06-19 10:46:18 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:46:21.927927 | orchestrator | 2025-06-19 10:46:21 | INFO  | Task 98fb0674-e637-4364-af14-b475d39ce587 is in state STARTED
2025-06-19 10:46:21.928304 | orchestrator | 2025-06-19 10:46:21 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED
2025-06-19 10:46:21.929388 | orchestrator | 2025-06-19 10:46:21 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:46:21.929415 | orchestrator | 2025-06-19 10:46:21 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:46:24.967067 | orchestrator | 2025-06-19 10:46:24 | INFO  | Task 98fb0674-e637-4364-af14-b475d39ce587 is in state STARTED
2025-06-19 10:46:24.969421 | orchestrator | 2025-06-19 10:46:24 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED
2025-06-19 10:46:24.969459 | orchestrator | 2025-06-19 10:46:24 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:46:24.969471 | orchestrator | 2025-06-19 10:46:24 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:46:28.010428 | orchestrator | 2025-06-19 10:46:28 | INFO  | Task 98fb0674-e637-4364-af14-b475d39ce587 is in state STARTED
2025-06-19 10:46:28.010539 | orchestrator | 2025-06-19 10:46:28 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED
2025-06-19 10:46:28.011294 | orchestrator | 2025-06-19 10:46:28 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:46:28.011363 | orchestrator | 2025-06-19 10:46:28 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:46:31.051733 | orchestrator | 2025-06-19 10:46:31 | INFO  | Task 98fb0674-e637-4364-af14-b475d39ce587 is in state STARTED
2025-06-19 10:46:31.053660 | orchestrator | 2025-06-19 10:46:31 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED
2025-06-19 10:46:31.054912 | orchestrator | 2025-06-19 10:46:31 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:46:31.055027 | orchestrator | 2025-06-19 10:46:31 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:46:34.114894 | orchestrator | 2025-06-19 10:46:34 | INFO  | Task 98fb0674-e637-4364-af14-b475d39ce587 is in state STARTED
2025-06-19 10:46:34.116071 | orchestrator | 2025-06-19 10:46:34 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED
2025-06-19 10:46:34.117376 | orchestrator | 2025-06-19 10:46:34 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:46:34.117404 | orchestrator | 2025-06-19 10:46:34 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:46:37.155695 | orchestrator | 2025-06-19 10:46:37 | INFO  | Task 98fb0674-e637-4364-af14-b475d39ce587 is in state STARTED
2025-06-19 10:46:37.158632 | orchestrator | 2025-06-19 10:46:37 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED
2025-06-19 10:46:37.162984 | orchestrator | 2025-06-19 10:46:37 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:46:37.163322 | orchestrator | 2025-06-19 10:46:37 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:46:40.211482 | orchestrator | 2025-06-19 10:46:40 | INFO  | Task 98fb0674-e637-4364-af14-b475d39ce587 is in state STARTED
2025-06-19 10:46:40.213069 | orchestrator | 2025-06-19 10:46:40 | INFO  | Task
59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:46:40.214255 | orchestrator | 2025-06-19 10:46:40 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED 2025-06-19 10:46:40.214285 | orchestrator | 2025-06-19 10:46:40 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:46:43.257636 | orchestrator | 2025-06-19 10:46:43 | INFO  | Task 98fb0674-e637-4364-af14-b475d39ce587 is in state STARTED 2025-06-19 10:46:43.260025 | orchestrator | 2025-06-19 10:46:43 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:46:43.262789 | orchestrator | 2025-06-19 10:46:43 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED 2025-06-19 10:46:43.262814 | orchestrator | 2025-06-19 10:46:43 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:46:46.311512 | orchestrator | 2025-06-19 10:46:46 | INFO  | Task 98fb0674-e637-4364-af14-b475d39ce587 is in state STARTED 2025-06-19 10:46:46.312228 | orchestrator | 2025-06-19 10:46:46 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:46:46.313377 | orchestrator | 2025-06-19 10:46:46 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED 2025-06-19 10:46:46.313408 | orchestrator | 2025-06-19 10:46:46 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:46:49.363300 | orchestrator | 2025-06-19 10:46:49 | INFO  | Task 98fb0674-e637-4364-af14-b475d39ce587 is in state STARTED 2025-06-19 10:46:49.365084 | orchestrator | 2025-06-19 10:46:49 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:46:49.366387 | orchestrator | 2025-06-19 10:46:49 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED 2025-06-19 10:46:49.367092 | orchestrator | 2025-06-19 10:46:49 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:46:52.405564 | orchestrator | 2025-06-19 10:46:52 | INFO  | Task 98fb0674-e637-4364-af14-b475d39ce587 is in state 
STARTED 2025-06-19 10:46:52.406648 | orchestrator | 2025-06-19 10:46:52 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:46:52.409632 | orchestrator | 2025-06-19 10:46:52 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED 2025-06-19 10:46:52.409658 | orchestrator | 2025-06-19 10:46:52 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:46:55.455515 | orchestrator | 2025-06-19 10:46:55 | INFO  | Task 98fb0674-e637-4364-af14-b475d39ce587 is in state SUCCESS 2025-06-19 10:46:55.457486 | orchestrator | 2025-06-19 10:46:55.457555 | orchestrator | 2025-06-19 10:46:55.457570 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-19 10:46:55.457583 | orchestrator | 2025-06-19 10:46:55.457596 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-19 10:46:55.457607 | orchestrator | Thursday 19 June 2025 10:44:42 +0000 (0:00:00.270) 0:00:00.270 ********* 2025-06-19 10:46:55.457618 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:46:55.457631 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:46:55.457642 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:46:55.457653 | orchestrator | 2025-06-19 10:46:55.457664 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-19 10:46:55.457675 | orchestrator | Thursday 19 June 2025 10:44:42 +0000 (0:00:00.285) 0:00:00.556 ********* 2025-06-19 10:46:55.457702 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-06-19 10:46:55.457714 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-06-19 10:46:55.457724 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-06-19 10:46:55.457735 | orchestrator | 2025-06-19 10:46:55.457746 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-06-19 10:46:55.457757 | 
orchestrator | 2025-06-19 10:46:55.457768 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-06-19 10:46:55.457779 | orchestrator | Thursday 19 June 2025 10:44:42 +0000 (0:00:00.435) 0:00:00.992 ********* 2025-06-19 10:46:55.457790 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:46:55.457801 | orchestrator | 2025-06-19 10:46:55.457812 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-06-19 10:46:55.457854 | orchestrator | Thursday 19 June 2025 10:44:43 +0000 (0:00:00.534) 0:00:01.526 ********* 2025-06-19 10:46:55.457869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-19 10:46:55.457885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-19 10:46:55.457911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-19 10:46:55.457923 | orchestrator | 2025-06-19 10:46:55.457934 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-06-19 10:46:55.457945 | orchestrator | Thursday 19 June 2025 10:44:44 +0000 (0:00:00.769) 0:00:02.295 ********* 2025-06-19 10:46:55.457956 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-06-19 10:46:55.457968 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-06-19 10:46:55.457979 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-19 10:46:55.457990 | orchestrator | 2025-06-19 10:46:55.458001 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-06-19 10:46:55.459100 | orchestrator | Thursday 19 June 2025 10:44:45 +0000 (0:00:00.887) 0:00:03.185 ********* 2025-06-19 10:46:55.459144 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:46:55.459156 | orchestrator | 2025-06-19 10:46:55.459190 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-06-19 
10:46:55.459203 | orchestrator | Thursday 19 June 2025 10:44:45 +0000 (0:00:00.744) 0:00:03.929 ********* 2025-06-19 10:46:55.459268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-19 10:46:55.459300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-19 10:46:55.459314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-19 10:46:55.459327 | orchestrator | 2025-06-19 10:46:55.459339 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-06-19 10:46:55.459351 | orchestrator | Thursday 19 June 2025 10:44:47 +0000 (0:00:01.365) 0:00:05.295 ********* 2025-06-19 10:46:55.459373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-19 10:46:55.459387 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:46:55.459400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-19 
10:46:55.459412 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:46:55.459457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-19 10:46:55.459472 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:46:55.459484 | orchestrator | 2025-06-19 10:46:55.459514 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-06-19 10:46:55.459547 | orchestrator | Thursday 19 June 2025 10:44:47 +0000 (0:00:00.408) 0:00:05.704 ********* 2025-06-19 10:46:55.459561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-19 10:46:55.459574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 
'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-19 10:46:55.459660 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:46:55.459674 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:46:55.459687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-19 10:46:55.459699 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:46:55.459712 | orchestrator | 2025-06-19 10:46:55.459724 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-06-19 10:46:55.459736 | orchestrator | Thursday 19 June 2025 10:44:48 +0000 (0:00:00.797) 0:00:06.502 ********* 2025-06-19 10:46:55.459755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-19 10:46:55.459769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-19 10:46:55.459823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-19 10:46:55.459837 | orchestrator | 2025-06-19 10:46:55.459850 | orchestrator | TASK [grafana : Copying over grafana.ini] 
************************************** 2025-06-19 10:46:55.459862 | orchestrator | Thursday 19 June 2025 10:44:49 +0000 (0:00:01.442) 0:00:07.945 ********* 2025-06-19 10:46:55.459874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-19 10:46:55.459888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-19 10:46:55.459901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-19 10:46:55.459913 | orchestrator | 2025-06-19 10:46:55.459930 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-06-19 10:46:55.459942 | orchestrator | Thursday 19 June 2025 10:44:51 +0000 (0:00:01.516) 0:00:09.461 ********* 2025-06-19 10:46:55.459954 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:46:55.459966 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:46:55.459978 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:46:55.459990 | orchestrator | 2025-06-19 10:46:55.460002 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-06-19 10:46:55.460014 | orchestrator | Thursday 19 June 2025 10:44:52 +0000 (0:00:00.704) 0:00:10.166 ********* 2025-06-19 10:46:55.460026 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-06-19 10:46:55.460039 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-06-19 10:46:55.460051 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-06-19 10:46:55.460071 | orchestrator | 2025-06-19 10:46:55.460083 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-06-19 10:46:55.460095 | orchestrator | Thursday 19 June 2025 10:44:53 +0000 (0:00:01.353) 0:00:11.520 ********* 2025-06-19 10:46:55.460107 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-06-19 10:46:55.460119 | orchestrator | changed: [testbed-node-2] => 
(item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-06-19 10:46:55.460131 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-06-19 10:46:55.460143 | orchestrator | 2025-06-19 10:46:55.460155 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-06-19 10:46:55.460188 | orchestrator | Thursday 19 June 2025 10:44:54 +0000 (0:00:01.380) 0:00:12.900 ********* 2025-06-19 10:46:55.460232 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-19 10:46:55.460246 | orchestrator | 2025-06-19 10:46:55.460259 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-06-19 10:46:55.460271 | orchestrator | Thursday 19 June 2025 10:44:55 +0000 (0:00:00.863) 0:00:13.764 ********* 2025-06-19 10:46:55.460283 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-06-19 10:46:55.460296 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-06-19 10:46:55.460308 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:46:55.460320 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:46:55.460332 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:46:55.460343 | orchestrator | 2025-06-19 10:46:55.460355 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-06-19 10:46:55.460367 | orchestrator | Thursday 19 June 2025 10:44:56 +0000 (0:00:01.044) 0:00:14.808 ********* 2025-06-19 10:46:55.460379 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:46:55.460391 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:46:55.460402 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:46:55.460414 | orchestrator | 2025-06-19 10:46:55.460426 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-06-19 
10:46:55.460438 | orchestrator | Thursday 19 June 2025 10:44:57 +0000 (0:00:00.571) 0:00:15.380 ********* 2025-06-19 10:46:55.460451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 846018, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9235163, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.460466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 846018, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9235163, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.460485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 846018, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9235163, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.460505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 846007, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9205163, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.460549 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 846007, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9205163, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.460564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 846007, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9205163, 'gr_name': 'root', 
2025-06-19 10:46:55.460576 | orchestrator | changed: [testbed-node-0] [testbed-node-1] [testbed-node-2] => Grafana dashboard files under /operations/grafana/dashboards/ (each item a regular file, mode 0644, owner root:root, uid/gid 0, dev 111, nlink 1, atime/mtime 1750291393.0):
orchestrator |   ceph/osds-overview.json                       38432 bytes
orchestrator |   ceph/rbd-details.json                         12997 bytes
orchestrator |   ceph/host-details.json                        44791 bytes
orchestrator |   ceph/pool-detail.json                         19609 bytes
orchestrator |   ceph/radosgw-sync-overview.json               16156 bytes
orchestrator |   ceph/cephfs-overview.json                      9025 bytes
orchestrator |   ceph/README.md                                   84 bytes
orchestrator |   ceph/hosts-overview.json                      27218 bytes
orchestrator |   ceph/ceph-cluster.json                        34113 bytes
orchestrator |   ceph/radosgw-overview.json                    39556 bytes
orchestrator |   ceph/multi-cluster-overview.json              62676 bytes
orchestrator |   ceph/rbd-overview.json                        25686 bytes
orchestrator |   ceph/ceph_pools.json                          25279 bytes
orchestrator |   ceph/pool-overview.json                       49139 bytes
orchestrator |   ceph/ceph-cluster-advanced.json              117836 bytes
orchestrator |   ceph/ceph_overview.json                       80386 bytes
orchestrator |   ceph/osd-device-details.json                  26655 bytes
orchestrator |   infrastructure/node_exporter_full.json       682774 bytes
orchestrator |   infrastructure/libvirt.json                   29672 bytes
orchestrator |   infrastructure/alertmanager-overview.json      9645 bytes
9645, 'inode': 846023, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9245164, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.461555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 846023, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9245164, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.461571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 846143, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9525168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.461583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 846143, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9525168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.461601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 846143, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9525168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.461612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 846032, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9255164, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.461630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': 
{'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 846032, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9255164, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.461641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 846032, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9255164, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.461656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 846140, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9495168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.461668 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 846140, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9495168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.461687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 846140, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9495168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.461699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 846149, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9555168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.461716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 846149, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9555168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.461727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 846149, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9555168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.461743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 846130, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9465168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.461755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 846130, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9465168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.461773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 846130, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9465168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.461791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 846137, 'dev': 111, 
'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9485166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.461802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 846137, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9485166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.461814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 846137, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9485166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.461829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 
0, 'gid': 0, 'size': 53882, 'inode': 846035, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9265165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.461841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 846035, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9265165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.461857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 846035, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9265165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.461875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 846095, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9375165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.461886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 846095, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9375165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.461897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 846095, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9375165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.461913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 846155, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9565167, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.461924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 846155, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9565167, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.461935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 846155, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9565167, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.461958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 846141, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9505167, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.461970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 846141, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9505167, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.461981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 846141, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9505167, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.461992 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 846049, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9305165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.462008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 846049, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9305165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.462051 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 846049, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9305165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2025-06-19 10:46:55.462078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 846044, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9275165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.462091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 846044, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9275165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.462102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 846044, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9275165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.462113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 846063, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9325166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.462132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 846063, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9325166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.462143 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 846063, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9325166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.462166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 846071, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9365165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.462212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 846071, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9365165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.462223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 846071, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9365165, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.462234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 846098, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9385166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.462251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 846098, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9385166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.462262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 846098, 'dev': 111, 
'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9385166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.462286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 846135, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9475167, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.462297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 846135, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9475167, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.462309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
21109, 'inode': 846135, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9475167, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.462320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 846101, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9385166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.462331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 846101, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9385166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.462347 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 846101, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9385166, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.462365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 846160, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9585168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.462382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 846160, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9585168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.462393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 846160, 'dev': 111, 'nlink': 1, 'atime': 1750291393.0, 'mtime': 1750291393.0, 'ctime': 1750327055.9585168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-19 10:46:55.462405 | orchestrator | 2025-06-19 10:46:55.462416 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-06-19 10:46:55.462427 | orchestrator | Thursday 19 June 2025 10:45:34 +0000 (0:00:36.737) 0:00:52.117 ********* 2025-06-19 10:46:55.462438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-19 10:46:55.462449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-19 10:46:55.462465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-19 10:46:55.462483 | orchestrator | 2025-06-19 10:46:55.462494 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-06-19 10:46:55.462505 | orchestrator | Thursday 19 June 2025 10:45:35 +0000 (0:00:01.176) 0:00:53.294 ********* 2025-06-19 10:46:55.462516 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:46:55.462526 | orchestrator | 2025-06-19 10:46:55.462537 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-06-19 10:46:55.462548 | orchestrator | Thursday 19 June 2025 10:45:37 +0000 (0:00:02.565) 0:00:55.859 ********* 2025-06-19 10:46:55.462558 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:46:55.462569 | orchestrator | 2025-06-19 10:46:55.462579 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-06-19 10:46:55.462590 | orchestrator | Thursday 19 June 2025 10:45:39 +0000 (0:00:02.165) 0:00:58.025 ********* 2025-06-19 10:46:55.462601 | orchestrator | 2025-06-19 10:46:55.462611 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 
2025-06-19 10:46:55.462622 | orchestrator | Thursday 19 June 2025 10:45:40 +0000 (0:00:00.071) 0:00:58.096 ********* 2025-06-19 10:46:55.462633 | orchestrator | 2025-06-19 10:46:55.462648 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-06-19 10:46:55.462659 | orchestrator | Thursday 19 June 2025 10:45:40 +0000 (0:00:00.212) 0:00:58.309 ********* 2025-06-19 10:46:55.462670 | orchestrator | 2025-06-19 10:46:55.462681 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-06-19 10:46:55.462691 | orchestrator | Thursday 19 June 2025 10:45:40 +0000 (0:00:00.066) 0:00:58.375 ********* 2025-06-19 10:46:55.462702 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:46:55.462713 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:46:55.462723 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:46:55.462734 | orchestrator | 2025-06-19 10:46:55.462745 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-06-19 10:46:55.462755 | orchestrator | Thursday 19 June 2025 10:45:42 +0000 (0:00:01.824) 0:01:00.200 ********* 2025-06-19 10:46:55.462766 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:46:55.462777 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:46:55.462787 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-06-19 10:46:55.462798 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2025-06-19 10:46:55.462809 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 
2025-06-19 10:46:55.462820 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:46:55.462831 | orchestrator | 2025-06-19 10:46:55.462841 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-06-19 10:46:55.462852 | orchestrator | Thursday 19 June 2025 10:46:21 +0000 (0:00:38.916) 0:01:39.116 ********* 2025-06-19 10:46:55.462862 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:46:55.462873 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:46:55.462883 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:46:55.462894 | orchestrator | 2025-06-19 10:46:55.462905 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-06-19 10:46:55.462915 | orchestrator | Thursday 19 June 2025 10:46:48 +0000 (0:00:27.695) 0:02:06.812 ********* 2025-06-19 10:46:55.462926 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:46:55.462936 | orchestrator | 2025-06-19 10:46:55.462947 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-06-19 10:46:55.462958 | orchestrator | Thursday 19 June 2025 10:46:51 +0000 (0:00:02.359) 0:02:09.172 ********* 2025-06-19 10:46:55.462968 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:46:55.462979 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:46:55.462995 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:46:55.463006 | orchestrator | 2025-06-19 10:46:55.463017 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-06-19 10:46:55.463028 | orchestrator | Thursday 19 June 2025 10:46:51 +0000 (0:00:00.488) 0:02:09.661 ********* 2025-06-19 10:46:55.463040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': 
False}}})  2025-06-19 10:46:55.463054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-06-19 10:46:55.463065 | orchestrator | 2025-06-19 10:46:55.463076 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-06-19 10:46:55.463091 | orchestrator | Thursday 19 June 2025 10:46:53 +0000 (0:00:02.343) 0:02:12.004 ********* 2025-06-19 10:46:55.463102 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:46:55.463113 | orchestrator | 2025-06-19 10:46:55.463123 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-19 10:46:55.463135 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-19 10:46:55.463146 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-19 10:46:55.463157 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-19 10:46:55.463182 | orchestrator | 2025-06-19 10:46:55.463194 | orchestrator | 2025-06-19 10:46:55.463204 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-19 10:46:55.463215 | orchestrator | Thursday 19 June 2025 10:46:54 +0000 (0:00:00.263) 0:02:12.267 ********* 2025-06-19 10:46:55.463225 | orchestrator | =============================================================================== 2025-06-19 10:46:55.463236 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.92s 2025-06-19 10:46:55.463247 | orchestrator | grafana : Copying over custom 
dashboards ------------------------------- 36.74s 2025-06-19 10:46:55.463257 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 27.70s 2025-06-19 10:46:55.463268 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.57s 2025-06-19 10:46:55.463279 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.36s 2025-06-19 10:46:55.463289 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.34s 2025-06-19 10:46:55.463305 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.17s 2025-06-19 10:46:55.463316 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.82s 2025-06-19 10:46:55.463327 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.52s 2025-06-19 10:46:55.463337 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.44s 2025-06-19 10:46:55.463348 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.38s 2025-06-19 10:46:55.463358 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.37s 2025-06-19 10:46:55.463369 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.35s 2025-06-19 10:46:55.463380 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.18s 2025-06-19 10:46:55.463390 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 1.04s 2025-06-19 10:46:55.463408 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.89s 2025-06-19 10:46:55.463419 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.86s 2025-06-19 10:46:55.463429 | orchestrator | service-cert-copy : grafana | Copying over 
backend internal TLS key ----- 0.80s 2025-06-19 10:46:55.463440 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.77s 2025-06-19 10:46:55.463451 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.74s 2025-06-19 10:46:55.463461 | orchestrator | 2025-06-19 10:46:55 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:46:55.463472 | orchestrator | 2025-06-19 10:46:55 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED 2025-06-19 10:46:55.463483 | orchestrator | 2025-06-19 10:46:55 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:46:58.501122 | orchestrator | 2025-06-19 10:46:58 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:46:58.502633 | orchestrator | 2025-06-19 10:46:58 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED 2025-06-19 10:46:58.503602 | orchestrator | 2025-06-19 10:46:58 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:47:01.544649 | orchestrator | 2025-06-19 10:47:01 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:47:01.545583 | orchestrator | 2025-06-19 10:47:01 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED 2025-06-19 10:47:01.546405 | orchestrator | 2025-06-19 10:47:01 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:47:04.589362 | orchestrator | 2025-06-19 10:47:04 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state STARTED 2025-06-19 10:47:04.589473 | orchestrator | 2025-06-19 10:47:04 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED 2025-06-19 10:47:04.589488 | orchestrator | 2025-06-19 10:47:04 | INFO  | Wait 1 second(s) until the next check 2025-06-19 10:47:07.640735 | orchestrator | 2025-06-19 10:47:07.640837 | orchestrator | 2025-06-19 10:47:07.640852 | orchestrator | PLAY [Group hosts based on configuration] 
************************************** 2025-06-19 10:47:07.640865 | orchestrator | 2025-06-19 10:47:07.640877 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-06-19 10:47:07.640888 | orchestrator | Thursday 19 June 2025 10:38:21 +0000 (0:00:00.867) 0:00:00.867 ********* 2025-06-19 10:47:07.640914 | orchestrator | changed: [testbed-manager] 2025-06-19 10:47:07.640927 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:47:07.640938 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:47:07.640949 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:47:07.640959 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:47:07.640970 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:47:07.640981 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:47:07.640992 | orchestrator | 2025-06-19 10:47:07.641003 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-19 10:47:07.641014 | orchestrator | Thursday 19 June 2025 10:38:22 +0000 (0:00:01.303) 0:00:02.171 ********* 2025-06-19 10:47:07.641025 | orchestrator | changed: [testbed-manager] 2025-06-19 10:47:07.641035 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:47:07.641046 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:47:07.641057 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:47:07.641067 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:47:07.641078 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:47:07.641089 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:47:07.641100 | orchestrator | 2025-06-19 10:47:07.641111 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-19 10:47:07.641122 | orchestrator | Thursday 19 June 2025 10:38:23 +0000 (0:00:00.856) 0:00:03.027 ********* 2025-06-19 10:47:07.641153 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-06-19 
10:47:07.641189 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-06-19 10:47:07.641201 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-06-19 10:47:07.641213 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-06-19 10:47:07.641225 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-06-19 10:47:07.641918 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-06-19 10:47:07.641940 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-06-19 10:47:07.641951 | orchestrator | 2025-06-19 10:47:07.641962 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-06-19 10:47:07.641974 | orchestrator | 2025-06-19 10:47:07.641984 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-06-19 10:47:07.641996 | orchestrator | Thursday 19 June 2025 10:38:24 +0000 (0:00:00.993) 0:00:04.020 ********* 2025-06-19 10:47:07.642007 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:47:07.642066 | orchestrator | 2025-06-19 10:47:07.642080 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-06-19 10:47:07.642092 | orchestrator | Thursday 19 June 2025 10:38:25 +0000 (0:00:00.957) 0:00:04.978 ********* 2025-06-19 10:47:07.642103 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-06-19 10:47:07.642115 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-06-19 10:47:07.642156 | orchestrator | 2025-06-19 10:47:07.642188 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-06-19 10:47:07.642199 | orchestrator | Thursday 19 June 2025 10:38:29 +0000 (0:00:04.468) 0:00:09.447 ********* 2025-06-19 10:47:07.642210 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-19 10:47:07.642221 
| orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-19 10:47:07.642232 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:47:07.642243 | orchestrator | 2025-06-19 10:47:07.642254 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-06-19 10:47:07.642265 | orchestrator | Thursday 19 June 2025 10:38:34 +0000 (0:00:04.679) 0:00:14.127 ********* 2025-06-19 10:47:07.642276 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:47:07.642287 | orchestrator | 2025-06-19 10:47:07.642298 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-06-19 10:47:07.642308 | orchestrator | Thursday 19 June 2025 10:38:35 +0000 (0:00:00.735) 0:00:14.862 ********* 2025-06-19 10:47:07.642319 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:47:07.642330 | orchestrator | 2025-06-19 10:47:07.642341 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-06-19 10:47:07.642352 | orchestrator | Thursday 19 June 2025 10:38:36 +0000 (0:00:01.510) 0:00:16.373 ********* 2025-06-19 10:47:07.642363 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:47:07.642374 | orchestrator | 2025-06-19 10:47:07.642385 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-06-19 10:47:07.642396 | orchestrator | Thursday 19 June 2025 10:38:39 +0000 (0:00:03.180) 0:00:19.554 ********* 2025-06-19 10:47:07.642406 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:47:07.642417 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:47:07.642428 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:47:07.642439 | orchestrator | 2025-06-19 10:47:07.642450 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-06-19 10:47:07.642461 | orchestrator | Thursday 19 June 2025 10:38:40 +0000 (0:00:00.252) 0:00:19.807 ********* 2025-06-19 
10:47:07.642472 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:47:07.642483 | orchestrator | 2025-06-19 10:47:07.642493 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-06-19 10:47:07.642504 | orchestrator | Thursday 19 June 2025 10:39:10 +0000 (0:00:30.920) 0:00:50.728 ********* 2025-06-19 10:47:07.642515 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:47:07.643229 | orchestrator | 2025-06-19 10:47:07.643253 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-06-19 10:47:07.643264 | orchestrator | Thursday 19 June 2025 10:39:26 +0000 (0:00:15.396) 0:01:06.124 ********* 2025-06-19 10:47:07.643275 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:47:07.643286 | orchestrator | 2025-06-19 10:47:07.643297 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-06-19 10:47:07.643308 | orchestrator | Thursday 19 June 2025 10:39:38 +0000 (0:00:12.501) 0:01:18.626 ********* 2025-06-19 10:47:07.643417 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:47:07.643434 | orchestrator | 2025-06-19 10:47:07.643445 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-06-19 10:47:07.643457 | orchestrator | Thursday 19 June 2025 10:39:39 +0000 (0:00:01.052) 0:01:19.678 ********* 2025-06-19 10:47:07.643467 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:47:07.643478 | orchestrator | 2025-06-19 10:47:07.643499 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-06-19 10:47:07.643510 | orchestrator | Thursday 19 June 2025 10:39:40 +0000 (0:00:00.525) 0:01:20.203 ********* 2025-06-19 10:47:07.643521 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:47:07.643532 | orchestrator | 2025-06-19 10:47:07.643543 | orchestrator | 
TASK [nova : Running Nova API bootstrap container] ***************************** 2025-06-19 10:47:07.643554 | orchestrator | Thursday 19 June 2025 10:39:40 +0000 (0:00:00.532) 0:01:20.736 ********* 2025-06-19 10:47:07.643564 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:47:07.643575 | orchestrator | 2025-06-19 10:47:07.643586 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-06-19 10:47:07.643596 | orchestrator | Thursday 19 June 2025 10:39:58 +0000 (0:00:17.701) 0:01:38.438 ********* 2025-06-19 10:47:07.643607 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:47:07.643618 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:47:07.643628 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:47:07.643639 | orchestrator | 2025-06-19 10:47:07.643650 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-06-19 10:47:07.643660 | orchestrator | 2025-06-19 10:47:07.643671 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-06-19 10:47:07.643682 | orchestrator | Thursday 19 June 2025 10:39:58 +0000 (0:00:00.263) 0:01:38.701 ********* 2025-06-19 10:47:07.643692 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:47:07.643703 | orchestrator | 2025-06-19 10:47:07.643714 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-06-19 10:47:07.643725 | orchestrator | Thursday 19 June 2025 10:39:59 +0000 (0:00:00.500) 0:01:39.202 ********* 2025-06-19 10:47:07.643735 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:47:07.643746 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:47:07.643757 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:47:07.643767 | orchestrator | 2025-06-19 10:47:07.643778 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 
2025-06-19 10:47:07.643789 | orchestrator | Thursday 19 June 2025 10:40:01 +0000 (0:00:01.968) 0:01:41.171 *********
2025-06-19 10:47:07.643799 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:47:07.643810 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:47:07.643821 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:47:07.643831 | orchestrator |
2025-06-19 10:47:07.643842 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-06-19 10:47:07.643875 | orchestrator | Thursday 19 June 2025 10:40:03 +0000 (0:00:02.177) 0:01:43.348 *********
2025-06-19 10:47:07.643887 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:47:07.643898 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:47:07.643908 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:47:07.643919 | orchestrator |
2025-06-19 10:47:07.643929 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-06-19 10:47:07.643940 | orchestrator | Thursday 19 June 2025 10:40:03 +0000 (0:00:00.305) 0:01:43.653 *********
2025-06-19 10:47:07.643961 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-06-19 10:47:07.643971 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:47:07.643982 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-06-19 10:47:07.643992 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:47:07.644003 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-06-19 10:47:07.644014 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2025-06-19 10:47:07.644026 | orchestrator |
2025-06-19 10:47:07.644039 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-06-19 10:47:07.644051 | orchestrator | Thursday 19 June 2025 10:40:11 +0000 (0:00:07.932) 0:01:51.586 *********
2025-06-19 10:47:07.644064 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:47:07.644076 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:47:07.644088 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:47:07.644100 | orchestrator |
2025-06-19 10:47:07.644112 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-06-19 10:47:07.644124 | orchestrator | Thursday 19 June 2025 10:40:12 +0000 (0:00:00.294) 0:01:51.881 *********
2025-06-19 10:47:07.644137 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-06-19 10:47:07.644148 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:47:07.644199 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-06-19 10:47:07.644213 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:47:07.644225 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-06-19 10:47:07.644237 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:47:07.644249 | orchestrator |
2025-06-19 10:47:07.644262 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2025-06-19 10:47:07.644275 | orchestrator | Thursday 19 June 2025 10:40:13 +0000 (0:00:00.965) 0:01:52.846 *********
2025-06-19 10:47:07.644287 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:47:07.644299 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:47:07.644312 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:47:07.644324 | orchestrator |
2025-06-19 10:47:07.644336 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2025-06-19 10:47:07.644349 | orchestrator | Thursday 19 June 2025 10:40:13 +0000 (0:00:00.691) 0:01:53.538 *********
2025-06-19 10:47:07.644361 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:47:07.644374 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:47:07.644385 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:47:07.644395 | orchestrator |
2025-06-19 10:47:07.644406 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2025-06-19 10:47:07.644417 | orchestrator | Thursday 19 June 2025 10:40:14 +0000 (0:00:01.115) 0:01:54.653 *********
2025-06-19 10:47:07.644427 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:47:07.644438 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:47:07.644539 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:47:07.644555 | orchestrator |
2025-06-19 10:47:07.644566 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2025-06-19 10:47:07.644577 | orchestrator | Thursday 19 June 2025 10:40:16 +0000 (0:00:02.088) 0:01:56.741 *********
2025-06-19 10:47:07.644587 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:47:07.644605 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:47:07.644616 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:47:07.644627 | orchestrator |
2025-06-19 10:47:07.644637 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-06-19 10:47:07.644648 | orchestrator | Thursday 19 June 2025 10:40:38 +0000 (0:00:21.131) 0:02:17.873 *********
2025-06-19 10:47:07.644659 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:47:07.644669 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:47:07.644680 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:47:07.644691 | orchestrator |
2025-06-19 10:47:07.644701 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-06-19 10:47:07.644720 | orchestrator | Thursday 19 June 2025 10:40:50 +0000 (0:00:11.972) 0:02:29.845 *********
2025-06-19 10:47:07.644731 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:47:07.644742 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:47:07.644752 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:47:07.644763 | orchestrator |
2025-06-19 10:47:07.644774 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2025-06-19 10:47:07.644785 | orchestrator | Thursday 19 June 2025 10:40:51 +0000 (0:00:01.001) 0:02:30.846 *********
2025-06-19 10:47:07.644795 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:47:07.644806 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:47:07.644816 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:47:07.644827 | orchestrator |
2025-06-19 10:47:07.644838 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2025-06-19 10:47:07.644848 | orchestrator | Thursday 19 June 2025 10:41:04 +0000 (0:00:13.009) 0:02:43.856 *********
2025-06-19 10:47:07.644859 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:47:07.644870 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:47:07.644880 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:47:07.644891 | orchestrator |
2025-06-19 10:47:07.644902 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-06-19 10:47:07.644913 | orchestrator | Thursday 19 June 2025 10:41:06 +0000 (0:00:02.404) 0:02:46.261 *********
2025-06-19 10:47:07.644923 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:47:07.644934 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:47:07.644944 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:47:07.644955 | orchestrator |
2025-06-19 10:47:07.644966 | orchestrator | PLAY [Apply role nova] *********************************************************
2025-06-19 10:47:07.644977 | orchestrator |
2025-06-19 10:47:07.644987 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-06-19 10:47:07.644998 | orchestrator | Thursday 19 June 2025 10:41:06 +0000 (0:00:00.306) 0:02:46.567 *********
2025-06-19 10:47:07.645009 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-19 10:47:07.645021 | orchestrator |
2025-06-19 10:47:07.645031 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2025-06-19 10:47:07.645042 | orchestrator | Thursday 19 June 2025 10:41:07 +0000 (0:00:00.396) 0:02:46.964 *********
2025-06-19 10:47:07.645053 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2025-06-19 10:47:07.645063 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2025-06-19 10:47:07.645074 | orchestrator |
2025-06-19 10:47:07.645085 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2025-06-19 10:47:07.645095 | orchestrator | Thursday 19 June 2025 10:41:10 +0000 (0:00:03.259) 0:02:50.223 *********
2025-06-19 10:47:07.645106 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2025-06-19 10:47:07.645119 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2025-06-19 10:47:07.645130 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2025-06-19 10:47:07.645140 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2025-06-19 10:47:07.645153 | orchestrator |
2025-06-19 10:47:07.645191 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2025-06-19 10:47:07.645211 | orchestrator | Thursday 19 June 2025 10:41:17 +0000 (0:00:06.739) 0:02:56.962 *********
2025-06-19 10:47:07.645233 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-06-19 10:47:07.645251 | orchestrator |
2025-06-19 10:47:07.645268 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2025-06-19 10:47:07.645282 | orchestrator | Thursday 19 June 2025 10:41:21 +0000 (0:00:04.087) 0:03:01.050 *********
2025-06-19 10:47:07.645294 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-06-19 10:47:07.645316 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2025-06-19 10:47:07.645329 | orchestrator |
2025-06-19 10:47:07.645341 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2025-06-19 10:47:07.645354 | orchestrator | Thursday 19 June 2025 10:41:25 +0000 (0:00:04.014) 0:03:05.065 *********
2025-06-19 10:47:07.645366 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-06-19 10:47:07.645378 | orchestrator |
2025-06-19 10:47:07.645389 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2025-06-19 10:47:07.645400 | orchestrator | Thursday 19 June 2025 10:41:28 +0000 (0:00:03.393) 0:03:08.458 *********
2025-06-19 10:47:07.645411 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2025-06-19 10:47:07.645421 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2025-06-19 10:47:07.645432 | orchestrator |
2025-06-19 10:47:07.645443 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-06-19 10:47:07.645535 | orchestrator | Thursday 19 June 2025 10:41:36 +0000 (0:00:07.546) 0:03:16.005 *********
2025-06-19 10:47:07.645562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774
'], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-19 10:47:07.645580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-19 10:47:07.645594 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-19 10:47:07.645688 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-19 10:47:07.645712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-19 10:47:07.645724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-19 10:47:07.645735 | orchestrator |
2025-06-19 10:47:07.645749 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2025-06-19 10:47:07.645767 | orchestrator | Thursday 19 June 2025 10:41:37 +0000 (0:00:01.740) 0:03:17.746 *********
2025-06-19 10:47:07.645786 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:47:07.645804 | orchestrator |
2025-06-19 10:47:07.645822 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2025-06-19 10:47:07.645836 | orchestrator | Thursday 19 June 2025 10:41:38 +0000 (0:00:00.846) 0:03:17.949 *********
2025-06-19 10:47:07.645847 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:47:07.645857 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:47:07.645868 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:47:07.645879 | orchestrator |
2025-06-19 10:47:07.645890 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2025-06-19 10:47:07.645900 | orchestrator | Thursday 19 June 2025 10:41:39 +0000 (0:00:00.846) 0:03:18.795 *********
2025-06-19 10:47:07.645911 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-19 10:47:07.645922 | orchestrator |
2025-06-19 10:47:07.645933 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2025-06-19 10:47:07.645943 | orchestrator | Thursday 19 June 2025 10:41:40 +0000 (0:00:01.044) 0:03:19.839 *********
2025-06-19 10:47:07.645977 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:47:07.645998 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:47:07.646009 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:47:07.646065 | orchestrator |
2025-06-19 10:47:07.646077 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-06-19 10:47:07.646087 | orchestrator | Thursday 19 June 2025 10:41:40 +0000 (0:00:00.249) 0:03:20.089 *********
2025-06-19 10:47:07.646098 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-19 10:47:07.646109 | orchestrator |
2025-06-19 10:47:07.646120 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2025-06-19 10:47:07.646131 | orchestrator | Thursday 19 June 2025 10:41:40 +0000 (0:00:00.411) 0:03:20.501 *********
2025-06-19 10:47:07.646143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3',
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-19 10:47:07.646236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 
'no'}}}}) 2025-06-19 10:47:07.646254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-19 10:47:07.646278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-19 10:47:07.646290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': 
{'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-19 10:47:07.646332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-19 10:47:07.646347 | orchestrator | 2025-06-19 10:47:07.646359 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-06-19 10:47:07.646372 | orchestrator | Thursday 19 June 2025 10:41:44 +0000 (0:00:03.347) 0:03:23.849 ********* 2025-06-19 10:47:07.646397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-19 10:47:07.646411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-19 10:47:07.646432 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:47:07.646446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 
'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-19 10:47:07.646460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-19 10:47:07.646473 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:47:07.646524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-19 10:47:07.646541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-19 10:47:07.646560 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:47:07.646574 | orchestrator | 2025-06-19 10:47:07.646587 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-06-19 10:47:07.646599 | orchestrator | Thursday 19 June 2025 10:41:45 +0000 (0:00:01.558) 0:03:25.407 ********* 2025-06-19 10:47:07.646612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-19 10:47:07.646626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-19 10:47:07.646639 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:47:07.646682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/',
2025-06-19 10:47:07 | INFO  | Task 59c3907a-f570-4d9e-9ef7-1a42179efb84 is in state SUCCESS
2025-06-19 10:47:07.646703 | orchestrator | '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-19 10:47:07.646716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-19 10:47:07.646734 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:47:07.646746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-19 10:47:07.646757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-19 10:47:07.646768 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:47:07.646779 | orchestrator | 2025-06-19 10:47:07.646790 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-06-19 10:47:07.646801 | orchestrator | Thursday 19 June 2025 10:41:47 +0000 (0:00:01.501) 0:03:26.909 ********* 2025-06-19 10:47:07.646847 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': 
{'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-19 10:47:07.646862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-19 10:47:07.646881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-19 10:47:07.646893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-19 10:47:07.646935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-19 10:47:07.646953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-19 10:47:07.646964 | orchestrator | 2025-06-19 10:47:07.646975 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-06-19 10:47:07.646992 | orchestrator | Thursday 19 June 2025 10:41:50 +0000 (0:00:03.803) 0:03:30.713 ********* 2025-06-19 10:47:07.647004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 
'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-19 10:47:07.647016 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-19 10:47:07.647064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-19 10:47:07.647078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-19 10:47:07.647097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-19 10:47:07.647108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-19 10:47:07.647119 | orchestrator | 2025-06-19 10:47:07.647130 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-06-19 10:47:07.647141 | orchestrator | Thursday 19 June 2025 10:41:58 +0000 (0:00:07.328) 0:03:38.041 ********* 2025-06-19 10:47:07.647152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-19 10:47:07.647262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-19 10:47:07.647277 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:47:07.647294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-19 10:47:07.647314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-19 10:47:07.647325 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:47:07.647337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-19 10:47:07.647348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-19 10:47:07.647360 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:47:07.647370 | orchestrator | 2025-06-19 10:47:07.647381 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-06-19 10:47:07.647392 | orchestrator | Thursday 19 June 2025 10:41:59 +0000 (0:00:01.316) 0:03:39.358 ********* 2025-06-19 
10:47:07.647431 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:47:07.647444 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:47:07.647455 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:47:07.647472 | orchestrator | 2025-06-19 10:47:07.647483 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-06-19 10:47:07.647502 | orchestrator | Thursday 19 June 2025 10:42:01 +0000 (0:00:01.620) 0:03:40.978 ********* 2025-06-19 10:47:07.647513 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:47:07.647525 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:47:07.647536 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:47:07.647547 | orchestrator | 2025-06-19 10:47:07.647558 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-06-19 10:47:07.647569 | orchestrator | Thursday 19 June 2025 10:42:01 +0000 (0:00:00.392) 0:03:41.370 ********* 2025-06-19 10:47:07.647580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-19 10:47:07.647593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-19 10:47:07.647632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-19 10:47:07.647656 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-19 10:47:07.647667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': 
'30'}}}) 2025-06-19 10:47:07.647677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-19 10:47:07.647687 | orchestrator | 2025-06-19 10:47:07.647696 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-06-19 10:47:07.647706 | orchestrator | Thursday 19 June 2025 10:42:03 +0000 (0:00:01.861) 0:03:43.232 ********* 2025-06-19 10:47:07.647716 | orchestrator | 2025-06-19 10:47:07.647725 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-06-19 10:47:07.647735 | orchestrator | Thursday 19 June 2025 10:42:03 +0000 (0:00:00.140) 0:03:43.373 ********* 2025-06-19 10:47:07.647745 | orchestrator | 2025-06-19 10:47:07.647754 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-06-19 10:47:07.647764 | orchestrator | Thursday 19 June 2025 10:42:03 +0000 (0:00:00.120) 0:03:43.493 ********* 2025-06-19 10:47:07.647774 | orchestrator | 2025-06-19 10:47:07.647783 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-06-19 10:47:07.647793 | orchestrator | Thursday 19 June 2025 10:42:03 +0000 (0:00:00.123) 0:03:43.616 ********* 2025-06-19 10:47:07.647803 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:47:07.647812 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:47:07.647822 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:47:07.647831 
| orchestrator | 2025-06-19 10:47:07.647841 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-06-19 10:47:07.647851 | orchestrator | Thursday 19 June 2025 10:42:20 +0000 (0:00:16.913) 0:04:00.530 ********* 2025-06-19 10:47:07.647861 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:47:07.647870 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:47:07.647880 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:47:07.647890 | orchestrator | 2025-06-19 10:47:07.647899 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-06-19 10:47:07.647909 | orchestrator | 2025-06-19 10:47:07.647919 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-19 10:47:07.647936 | orchestrator | Thursday 19 June 2025 10:42:29 +0000 (0:00:08.788) 0:04:09.318 ********* 2025-06-19 10:47:07.647946 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:47:07.647956 | orchestrator | 2025-06-19 10:47:07.647965 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-19 10:47:07.647975 | orchestrator | Thursday 19 June 2025 10:42:30 +0000 (0:00:01.030) 0:04:10.349 ********* 2025-06-19 10:47:07.647985 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:47:07.647994 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:47:07.648004 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:47:07.648013 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:47:07.648023 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:47:07.648033 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:47:07.648042 | orchestrator | 2025-06-19 10:47:07.648052 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 
2025-06-19 10:47:07.648062 | orchestrator | Thursday 19 June 2025 10:42:31 +0000 (0:00:00.507) 0:04:10.856 *********
2025-06-19 10:47:07.648071 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:47:07.648081 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:47:07.648091 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:47:07.648101 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-19 10:47:07.648110 | orchestrator |
2025-06-19 10:47:07.648145 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-06-19 10:47:07.648157 | orchestrator | Thursday 19 June 2025 10:42:31 +0000 (0:00:00.895) 0:04:11.751 *********
2025-06-19 10:47:07.648219 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2025-06-19 10:47:07.648235 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2025-06-19 10:47:07.648245 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2025-06-19 10:47:07.648255 | orchestrator |
2025-06-19 10:47:07.648264 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-06-19 10:47:07.648274 | orchestrator | Thursday 19 June 2025 10:42:32 +0000 (0:00:00.672) 0:04:12.424 *********
2025-06-19 10:47:07.648283 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter)
2025-06-19 10:47:07.648293 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter)
2025-06-19 10:47:07.648302 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter)
2025-06-19 10:47:07.648312 | orchestrator |
2025-06-19 10:47:07.648321 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-06-19 10:47:07.648331 | orchestrator | Thursday 19 June 2025 10:42:34 +0000 (0:00:01.405) 0:04:13.829 *********
2025-06-19 10:47:07.648340 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)
2025-06-19 10:47:07.648350 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:47:07.648359 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)
2025-06-19 10:47:07.648369 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:47:07.648378 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)
2025-06-19 10:47:07.648399 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:47:07.648407 | orchestrator |
2025-06-19 10:47:07.648415 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2025-06-19 10:47:07.648422 | orchestrator | Thursday 19 June 2025 10:42:34 +0000 (0:00:00.632) 0:04:14.462 *********
2025-06-19 10:47:07.648430 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2025-06-19 10:47:07.648438 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-06-19 10:47:07.648446 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:47:07.648453 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2025-06-19 10:47:07.648461 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-06-19 10:47:07.648476 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:47:07.648483 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2025-06-19 10:47:07.648491 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2025-06-19 10:47:07.648499 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-06-19 10:47:07.648507 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:47:07.648515 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-06-19 10:47:07.648523 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2025-06-19 10:47:07.648530 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2025-06-19 10:47:07.648538 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-06-19 10:47:07.648546 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-06-19 10:47:07.648554 | orchestrator |
2025-06-19 10:47:07.648561 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2025-06-19 10:47:07.648569 | orchestrator | Thursday 19 June 2025 10:42:36 +0000 (0:00:02.098) 0:04:16.561 *********
2025-06-19 10:47:07.648577 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:47:07.648585 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:47:07.648592 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:47:07.648600 | orchestrator | changed: [testbed-node-4]
2025-06-19 10:47:07.648608 | orchestrator | changed: [testbed-node-3]
2025-06-19 10:47:07.648616 | orchestrator | changed: [testbed-node-5]
2025-06-19 10:47:07.648623 | orchestrator |
2025-06-19 10:47:07.648631 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] ***************************************
2025-06-19 10:47:07.648639 | orchestrator | Thursday 19 June 2025 10:42:37 +0000 (0:00:01.109) 0:04:17.670 *********
2025-06-19 10:47:07.648647 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:47:07.648655 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:47:07.648662 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:47:07.648670 | orchestrator | changed: [testbed-node-3]
2025-06-19 10:47:07.648678 | orchestrator | changed: [testbed-node-5]
2025-06-19 10:47:07.648685 | orchestrator | changed: [testbed-node-4]
2025-06-19 10:47:07.648693 | orchestrator |
2025-06-19 10:47:07.648701 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2025-06-19 10:47:07.648709 | orchestrator | Thursday 19 June 2025 10:42:39 +0000 (0:00:01.768) 0:04:19.438 ********* 2025-06-19
10:47:07.648743 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-19 10:47:07.648756 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-19 10:47:07.648766 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-19 10:47:07.648781 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-19 10:47:07.648790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-19 10:47:07.648799 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-19 10:47:07.648829 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-19 10:47:07.648839 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 
8022'], 'timeout': '30'}}}) 2025-06-19 10:47:07.648853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-19 10:47:07.648861 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-19 10:47:07.648869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-19 10:47:07.648877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': 
{'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-19 10:47:07.648912 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-19 10:47:07.648948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-19 10:47:07.648963 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-19 10:47:07.648972 | orchestrator |
2025-06-19 10:47:07.648980 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-06-19 10:47:07.648988 | orchestrator | Thursday 19 June 2025 10:42:42 +0000 (0:00:02.333) 0:04:21.772 *********
2025-06-19 10:47:07.648996 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-19 10:47:07.649005 | orchestrator |
2025-06-19 10:47:07.649013 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2025-06-19 10:47:07.649021 | orchestrator | Thursday 19 June 2025 10:42:43 +0000 (0:00:01.093) 0:04:22.865 *********
2025-06-19 10:47:07.649030 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-19 10:47:07.649039 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-19 10:47:07.649071 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-19 10:47:07.649086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-19 10:47:07.649094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-19 10:47:07.649103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-19 10:47:07.649111 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-19 10:47:07.649120 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-19 10:47:07.649128 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
sshd 8022'], 'timeout': '30'}}}) 2025-06-19 10:47:07.649183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-19 10:47:07.649212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-19 10:47:07.649221 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-19 10:47:07.649229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-19 10:47:07.649238 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-19 10:47:07.649247 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-19 10:47:07.649262 | orchestrator | 2025-06-19 10:47:07.649270 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-06-19 10:47:07.649278 | orchestrator | Thursday 19 June 2025 10:42:47 +0000 (0:00:03.891) 0:04:26.756 ********* 2025-06-19 10:47:07.649316 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-19 10:47:07.649326 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-19 10:47:07.649335 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-19 10:47:07.649343 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:47:07.649351 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 
'timeout': '30'}}})  2025-06-19 10:47:07.649360 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-19 10:47:07.649401 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-19 10:47:07.649411 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:47:07.649420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-19 10:47:07.649428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-19 10:47:07.649437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-19 10:47:07.649445 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:47:07.649453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-19 10:47:07.649461 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:47:07.649469 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-19 10:47:07.649504 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-19 10:47:07.649518 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': 
True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-19 10:47:07.649526 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:47:07.649534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-19 10:47:07.649542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-19 10:47:07.649551 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:47:07.649559 | orchestrator | 2025-06-19 10:47:07.649567 | 
orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-06-19 10:47:07.649575 | orchestrator | Thursday 19 June 2025 10:42:48 +0000 (0:00:01.490) 0:04:28.247 ********* 2025-06-19 10:47:07.649583 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-19 10:47:07.649598 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-19 10:47:07.649632 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 
'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-19 10:47:07.649642 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:47:07.649650 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-19 10:47:07.649659 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-19 10:47:07.649667 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-19 10:47:07.649681 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-19 10:47:07.649711 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': 
{'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-19 10:47:07.649720 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:47:07.649734 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-19 10:47:07.649742 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:47:07.649750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-19 10:47:07.649758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-19 10:47:07.649766 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:47:07.649775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-19 10:47:07.649788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-conductor 5672'], 'timeout': '30'}}})  2025-06-19 10:47:07.649797 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:47:07.649825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-19 10:47:07.649839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-19 10:47:07.649847 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:47:07.649855 | orchestrator | 2025-06-19 10:47:07.649863 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-19 10:47:07.649871 | orchestrator | Thursday 19 June 2025 10:42:50 +0000 (0:00:02.401) 0:04:30.648 ********* 2025-06-19 10:47:07.649879 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:47:07.649887 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:47:07.649895 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:47:07.649903 | orchestrator | included: 
/ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-19 10:47:07.649911 | orchestrator |
2025-06-19 10:47:07.649919 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2025-06-19 10:47:07.649927 | orchestrator | Thursday 19 June 2025 10:42:51 +0000 (0:00:00.988) 0:04:31.637 *********
2025-06-19 10:47:07.649935 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-06-19 10:47:07.649943 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-06-19 10:47:07.649950 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-06-19 10:47:07.649958 | orchestrator |
2025-06-19 10:47:07.649966 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2025-06-19 10:47:07.649974 | orchestrator | Thursday 19 June 2025 10:42:52 +0000 (0:00:00.901) 0:04:32.539 *********
2025-06-19 10:47:07.649982 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-06-19 10:47:07.649990 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-06-19 10:47:07.649998 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-06-19 10:47:07.650006 | orchestrator |
2025-06-19 10:47:07.650014 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2025-06-19 10:47:07.650064 | orchestrator | Thursday 19 June 2025 10:42:53 +0000 (0:00:01.188) 0:04:33.727 *********
2025-06-19 10:47:07.650078 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:47:07.650086 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:47:07.650094 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:47:07.650101 | orchestrator |
2025-06-19 10:47:07.650109 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2025-06-19 10:47:07.650117 | orchestrator | Thursday 19 June 2025 10:42:54 +0000 (0:00:00.800) 0:04:34.528 *********
2025-06-19 10:47:07.650125 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:47:07.650133 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:47:07.650140 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:47:07.650148 | orchestrator |
2025-06-19 10:47:07.650156 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2025-06-19 10:47:07.650184 | orchestrator | Thursday 19 June 2025 10:42:55 +0000 (0:00:00.471) 0:04:34.999 *********
2025-06-19 10:47:07.650193 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-06-19 10:47:07.650200 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-06-19 10:47:07.650208 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-06-19 10:47:07.650216 | orchestrator |
2025-06-19 10:47:07.650224 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2025-06-19 10:47:07.650232 | orchestrator | Thursday 19 June 2025 10:42:56 +0000 (0:00:01.153) 0:04:36.153 *********
2025-06-19 10:47:07.650240 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-06-19 10:47:07.650248 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-06-19 10:47:07.650255 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-06-19 10:47:07.650263 | orchestrator |
2025-06-19 10:47:07.650271 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2025-06-19 10:47:07.650279 | orchestrator | Thursday 19 June 2025 10:42:57 +0000 (0:00:01.164) 0:04:37.317 *********
2025-06-19 10:47:07.650287 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-06-19 10:47:07.650294 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-06-19 10:47:07.650302 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-06-19 10:47:07.650310 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2025-06-19 10:47:07.650318 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2025-06-19 10:47:07.650325 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2025-06-19 10:47:07.650333 | orchestrator |
2025-06-19 10:47:07.650341 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2025-06-19 10:47:07.650349 | orchestrator | Thursday 19 June 2025 10:43:01 +0000 (0:00:04.403) 0:04:41.721 *********
2025-06-19 10:47:07.650357 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:47:07.650364 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:47:07.650372 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:47:07.650380 | orchestrator |
2025-06-19 10:47:07.650387 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2025-06-19 10:47:07.650395 | orchestrator | Thursday 19 June 2025 10:43:02 +0000 (0:00:00.338) 0:04:42.059 *********
2025-06-19 10:47:07.650403 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:47:07.650411 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:47:07.650418 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:47:07.650426 | orchestrator |
2025-06-19 10:47:07.650434 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2025-06-19 10:47:07.650442 | orchestrator | Thursday 19 June 2025 10:43:02 +0000 (0:00:00.302) 0:04:42.361 *********
2025-06-19 10:47:07.650476 | orchestrator | changed: [testbed-node-3]
2025-06-19 10:47:07.650485 | orchestrator | changed: [testbed-node-4]
2025-06-19 10:47:07.650493 | orchestrator | changed: [testbed-node-5]
2025-06-19 10:47:07.650501 | orchestrator |
2025-06-19 10:47:07.650509 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2025-06-19 10:47:07.650521 | orchestrator | Thursday 19 June 2025 10:43:04 +0000 (0:00:01.551) 0:04:43.912 *********
2025-06-19 10:47:07.650537 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-06-19 10:47:07.650546 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-06-19 10:47:07.650554 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-06-19 10:47:07.650562 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-06-19 10:47:07.650570 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-06-19 10:47:07.650577 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-06-19 10:47:07.650585 | orchestrator |
2025-06-19 10:47:07.650593 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2025-06-19 10:47:07.650601 | orchestrator | Thursday 19 June 2025 10:43:07 +0000 (0:00:03.815) 0:04:47.728 *********
2025-06-19 10:47:07.650609 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-06-19 10:47:07.650616 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-06-19 10:47:07.650624 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-06-19 10:47:07.650632 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-06-19 10:47:07.650639 | orchestrator | changed: [testbed-node-3]
2025-06-19 10:47:07.650647 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-06-19 10:47:07.650654 | orchestrator | changed: [testbed-node-4]
2025-06-19 10:47:07.650662 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-06-19 10:47:07.650670 | orchestrator | changed: [testbed-node-5]
2025-06-19 10:47:07.650677 | orchestrator |
2025-06-19 10:47:07.650685 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2025-06-19 10:47:07.650693 | orchestrator | Thursday 19 June 2025 10:43:11 +0000 (0:00:03.322) 0:04:51.051 *********
2025-06-19 10:47:07.650701 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:47:07.650708 | orchestrator |
2025-06-19 10:47:07.650716 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2025-06-19 10:47:07.650724 | orchestrator | Thursday 19 June 2025 10:43:11 +0000 (0:00:00.152) 0:04:51.204 *********
2025-06-19 10:47:07.650731 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:47:07.650739 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:47:07.650747 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:47:07.650755 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:47:07.650762 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:47:07.650770 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:47:07.650778 | orchestrator |
2025-06-19 10:47:07.650786 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2025-06-19 10:47:07.650793 | orchestrator | Thursday 19 June 2025 10:43:12 +0000 (0:00:00.684) 0:04:51.888 *********
2025-06-19 10:47:07.650801 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-06-19 10:47:07.650809 | orchestrator |
2025-06-19 10:47:07.650817 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2025-06-19 10:47:07.650824 | orchestrator | Thursday 19 June 2025 10:43:13 +0000 (0:00:01.417) 0:04:53.305 *********
2025-06-19 10:47:07.650832 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:47:07.650840 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:47:07.650847 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:47:07.650855 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:47:07.650863 | orchestrator | 
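The "Pushing nova secret xml for libvirt" task logged above registers the ceph `client.nova` and `client.cinder` keys as libvirt secrets on each compute node (the two UUIDs in the loop items). A minimal sketch of the `<secret>` definition such a task renders per item; the XML shape follows libvirt's documented ceph-secret format, and treating the loop item's `uuid`/`name` keys as the template inputs is an assumption about the kolla-ansible template, not a copy of it:

```python
# Sketch of the libvirt ceph secret XML rendered per loop item
# (shape assumed from libvirt's documented <secret> format).
import xml.etree.ElementTree as ET

def ceph_secret_xml(uuid: str, name: str) -> str:
    # ephemeral="no" keeps the secret across libvirtd restarts;
    # private="no" lets the value be read back via the API.
    secret = ET.Element("secret", ephemeral="no", private="no")
    ET.SubElement(secret, "uuid").text = uuid
    usage = ET.SubElement(secret, "usage", type="ceph")
    ET.SubElement(usage, "name").text = name
    return ET.tostring(secret, encoding="unicode")

# Loop items taken verbatim from the log above:
items = [
    ("5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd", "client.nova secret"),
    ("63dd366f-e403-41f2-beff-dad9980a1637", "client.cinder secret"),
]
for uuid, name in items:
    print(ceph_secret_xml(uuid, name))
```

The follow-up "Pushing secrets key for libvirt" task then supplies the actual key value for each UUID (the censored `item=None` entries), which is why the two tasks run back to back.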
skipping: [testbed-node-1] 2025-06-19 10:47:07.650870 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:47:07.650878 | orchestrator | 2025-06-19 10:47:07.650886 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-06-19 10:47:07.650894 | orchestrator | Thursday 19 June 2025 10:43:14 +0000 (0:00:00.737) 0:04:54.043 ********* 2025-06-19 10:47:07.650908 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-19 10:47:07.650943 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': 
{'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-19 10:47:07.650954 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-19 10:47:07.650962 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-19 10:47:07.650970 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-19 10:47:07.650978 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-19 10:47:07.650992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-19 10:47:07.651008 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-19 10:47:07.651017 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-19 10:47:07.651025 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-19 10:47:07.651033 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-19 10:47:07.651041 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-19 10:47:07.651054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-19 10:47:07.651068 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-19 10:47:07.651080 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-19 10:47:07.651089 | orchestrator | 2025-06-19 10:47:07.651097 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-06-19 10:47:07.651105 | orchestrator | Thursday 19 June 2025 10:43:19 +0000 (0:00:04.715) 0:04:58.758 ********* 2025-06-19 10:47:07.651113 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', 
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-19 10:47:07.651122 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-19 10:47:07.651135 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-19 10:47:07.651143 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-19 10:47:07.651210 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-19 10:47:07.651222 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 
8022'], 'timeout': '30'}}})  2025-06-19 10:47:07.651231 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-19 10:47:07.651240 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-19 10:47:07.651257 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': 
True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-19 10:47:07.651276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-19 10:47:07.651285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-19 10:47:07.651293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-19 10:47:07.651301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-19 10:47:07.651315 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-19 10:47:07.651323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-19 10:47:07.651331 | orchestrator | 2025-06-19 10:47:07.651339 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-06-19 10:47:07.651347 | orchestrator | Thursday 19 June 2025 10:43:26 +0000 (0:00:07.952) 0:05:06.710 ********* 2025-06-19 10:47:07.651355 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:47:07.651363 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:47:07.651371 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:47:07.651378 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:47:07.651386 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:47:07.651394 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:47:07.651402 | orchestrator | 2025-06-19 10:47:07.651409 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-06-19 10:47:07.651417 | orchestrator | Thursday 19 June 2025 10:43:28 +0000 (0:00:01.624) 0:05:08.335 ********* 2025-06-19 10:47:07.651425 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-06-19 10:47:07.651433 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-06-19 10:47:07.651445 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-06-19 10:47:07.651453 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-06-19 10:47:07.651459 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-06-19 10:47:07.651466 | orchestrator | skipping: [testbed-node-0] 
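The `(item={...})` dictionaries echoed by these loop tasks follow kolla-ansible's container service definition shape (container name, image, volumes, `dimensions`, `healthcheck`). As a side note on how such a `healthcheck` mapping relates to Docker's native health-check options, here is a minimal sketch; the helper name and flag rendering are my own illustration, not code from the playbooks:

```python
# Sketch: translate a kolla-ansible style healthcheck mapping into
# `docker run` health-check flags. The dict shape is copied from the
# log above; the helper and flag rendering are assumptions, not kolla code.

def healthcheck_flags(hc: dict) -> list[str]:
    """Render a kolla healthcheck dict as docker CLI flags.

    Kolla stores interval/retries/start_period/timeout as strings of
    seconds and the test as ['CMD-SHELL', '<command>'].
    """
    kind, cmd = hc["test"]  # e.g. 'CMD-SHELL', 'healthcheck_port nova-conductor 5672'
    if kind != "CMD-SHELL":
        raise ValueError(f"unsupported test type: {kind}")
    return [
        f"--health-cmd={cmd}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]

# One of the nova-conductor healthcheck dicts from the log above:
hc = {"interval": "30", "retries": "3", "start_period": "5",
      "test": ["CMD-SHELL", "healthcheck_port nova-conductor 5672"],
      "timeout": "30"}
print(healthcheck_flags(hc)[0])  # --health-cmd=healthcheck_port nova-conductor 5672
```

The empty strings (`''`) inside the `volumes` lists are conditional mounts whose template rendered empty; kolla-ansible filters them out before starting the container.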
2025-06-19 10:47:07.651476 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-06-19 10:47:07.651483 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-06-19 10:47:07.651490 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:47:07.651496 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-06-19 10:47:07.651503 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:47:07.651510 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-06-19 10:47:07.651516 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-06-19 10:47:07.651523 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-06-19 10:47:07.651530 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-06-19 10:47:07.651536 | orchestrator |
2025-06-19 10:47:07.651543 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2025-06-19 10:47:07.651550 | orchestrator | Thursday 19 June 2025 10:43:31 +0000 (0:00:03.404) 0:05:11.740 *********
2025-06-19 10:47:07.651561 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:47:07.651568 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:47:07.651575 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:47:07.651581 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:47:07.651588 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:47:07.651595 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:47:07.651601 | orchestrator |
2025-06-19 10:47:07.651608 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] *********************
2025-06-19 10:47:07.651615 | orchestrator | Thursday 19 June 2025 10:43:32 +0000 (0:00:00.547) 0:05:12.287 *********
2025-06-19 10:47:07.651621 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-06-19 10:47:07.651628 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-06-19 10:47:07.651635 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-06-19 10:47:07.651642 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-06-19 10:47:07.651648 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-06-19 10:47:07.651655 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-06-19 10:47:07.651662 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-06-19 10:47:07.651668 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-06-19 10:47:07.651675 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-06-19 10:47:07.651682 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-06-19 10:47:07.651688 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:47:07.651695 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-06-19 10:47:07.651701 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:47:07.651708 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-06-19 10:47:07.651715 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-06-19 10:47:07.651721 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:47:07.651728 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-06-19 10:47:07.651735 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-06-19 10:47:07.651741 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-06-19 10:47:07.651748 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-06-19 10:47:07.651754 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-06-19 10:47:07.651761 | orchestrator |
2025-06-19 10:47:07.651768 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] **********************************
2025-06-19 10:47:07.651775 | orchestrator | Thursday 19 June 2025 10:43:37 +0000 (0:00:05.257) 0:05:17.545 *********
2025-06-19 10:47:07.651784 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-06-19 10:47:07.651791 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-06-19 10:47:07.651798 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-06-19 10:47:07.651814 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-06-19 10:47:07.651821 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-06-19 10:47:07.651828 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-06-19 10:47:07.651835 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-06-19 10:47:07.651841 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-06-19 10:47:07.651848 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-06-19 10:47:07.651854 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-06-19 10:47:07.651861 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-06-19 10:47:07.651868 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-06-19 10:47:07.651874 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:47:07.651881 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-06-19 10:47:07.651887 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-06-19 10:47:07.651894 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-06-19 10:47:07.651901 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-06-19 10:47:07.651907 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:47:07.651914 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-06-19 10:47:07.651921 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:47:07.651927 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-06-19 10:47:07.651934 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-06-19 10:47:07.651940 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-06-19 10:47:07.651947 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-06-19 10:47:07.651954 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-06-19 10:47:07.651960 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-06-19 10:47:07.651967 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-06-19 10:47:07.651974 | orchestrator |
2025-06-19 10:47:07.651980 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2025-06-19 10:47:07.651987 | orchestrator | Thursday 19 June 2025 10:43:45 +0000 (0:00:08.023) 0:05:25.569 *********
2025-06-19 10:47:07.651994 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:47:07.652000 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:47:07.652007 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:47:07.652013 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:47:07.652020 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:47:07.652026 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:47:07.652033 | orchestrator |
2025-06-19 10:47:07.652040 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2025-06-19 10:47:07.652046 | orchestrator | Thursday 19 June 2025 10:43:46 +0000 (0:00:00.597) 0:05:26.166 *********
2025-06-19 10:47:07.652053 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:47:07.652059 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:47:07.652066 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:47:07.652072 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:47:07.652079 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:47:07.652085 | orchestrator | skipping: [testbed-node-2] 2025-06-19
10:47:07.652096 | orchestrator | 2025-06-19 10:47:07.652103 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-06-19 10:47:07.652110 | orchestrator | Thursday 19 June 2025 10:43:46 +0000 (0:00:00.483) 0:05:26.650 ********* 2025-06-19 10:47:07.652117 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:47:07.652123 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:47:07.652130 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:47:07.652136 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:47:07.652143 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:47:07.652149 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:47:07.652156 | orchestrator | 2025-06-19 10:47:07.652178 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-06-19 10:47:07.652185 | orchestrator | Thursday 19 June 2025 10:43:49 +0000 (0:00:02.133) 0:05:28.784 ********* 2025-06-19 10:47:07.652200 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-19 10:47:07.652208 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 
'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-19 10:47:07.652215 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-19 10:47:07.652222 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:47:07.652229 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-19 10:47:07.652240 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-19 10:47:07.652247 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-19 10:47:07.652254 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:47:07.652267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-19 10:47:07.652275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-19 10:47:07.652281 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:47:07.652288 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-19 10:47:07.652295 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-19 10:47:07.652307 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-19 10:47:07.652314 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:47:07.652320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-19 10:47:07.652334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-19 10:47:07.652341 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:47:07.652348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-19 10:47:07.652355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-19 10:47:07.652362 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:47:07.652368 | orchestrator | 2025-06-19 10:47:07.652375 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-06-19 10:47:07.652382 | orchestrator | Thursday 19 June 2025 10:43:52 +0000 (0:00:03.165) 0:05:31.950 ********* 2025-06-19 10:47:07.652393 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-06-19 10:47:07.652400 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-06-19 10:47:07.652406 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:47:07.652413 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-06-19 10:47:07.652419 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-06-19 10:47:07.652426 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:47:07.652433 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-06-19 10:47:07.652439 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-06-19 10:47:07.652446 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:47:07.652452 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-06-19 10:47:07.652459 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-06-19 10:47:07.652465 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:47:07.652472 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-06-19 10:47:07.652478 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-06-19 10:47:07.652485 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:47:07.652491 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-06-19 10:47:07.652498 | 
orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-06-19 10:47:07.652504 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:47:07.652510 | orchestrator | 2025-06-19 10:47:07.652517 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-06-19 10:47:07.652524 | orchestrator | Thursday 19 June 2025 10:43:53 +0000 (0:00:01.076) 0:05:33.027 ********* 2025-06-19 10:47:07.652530 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-19 10:47:07.652545 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-19 10:47:07.652553 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-19 10:47:07.652565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-19 10:47:07.652572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-19 10:47:07.652579 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-19 10:47:07.652586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-19 10:47:07.652599 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-19 10:47:07.652607 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-19 10:47:07.652614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-19 10:47:07.652625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-19 10:47:07.652632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-19 10:47:07.652639 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-19 10:47:07.652654 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-19 10:47:07.652661 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-19 10:47:07.652672 | orchestrator | 2025-06-19 10:47:07.652679 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-19 10:47:07.652686 | orchestrator | Thursday 19 June 2025 10:43:56 +0000 (0:00:03.247) 0:05:36.275 ********* 2025-06-19 10:47:07.652693 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:47:07.652699 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:47:07.652706 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:47:07.652712 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:47:07.652719 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:47:07.652725 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:47:07.652732 
| orchestrator | 2025-06-19 10:47:07.652738 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-19 10:47:07.652745 | orchestrator | Thursday 19 June 2025 10:43:57 +0000 (0:00:00.692) 0:05:36.967 ********* 2025-06-19 10:47:07.652752 | orchestrator | 2025-06-19 10:47:07.652758 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-19 10:47:07.652765 | orchestrator | Thursday 19 June 2025 10:43:57 +0000 (0:00:00.126) 0:05:37.093 ********* 2025-06-19 10:47:07.652771 | orchestrator | 2025-06-19 10:47:07.652778 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-19 10:47:07.652784 | orchestrator | Thursday 19 June 2025 10:43:57 +0000 (0:00:00.120) 0:05:37.214 ********* 2025-06-19 10:47:07.652791 | orchestrator | 2025-06-19 10:47:07.652797 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-19 10:47:07.652804 | orchestrator | Thursday 19 June 2025 10:43:57 +0000 (0:00:00.121) 0:05:37.336 ********* 2025-06-19 10:47:07.652810 | orchestrator | 2025-06-19 10:47:07.652817 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-19 10:47:07.652824 | orchestrator | Thursday 19 June 2025 10:43:57 +0000 (0:00:00.221) 0:05:37.557 ********* 2025-06-19 10:47:07.652830 | orchestrator | 2025-06-19 10:47:07.652837 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-19 10:47:07.652843 | orchestrator | Thursday 19 June 2025 10:43:57 +0000 (0:00:00.121) 0:05:37.679 ********* 2025-06-19 10:47:07.652850 | orchestrator | 2025-06-19 10:47:07.652856 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-06-19 10:47:07.652863 | orchestrator | Thursday 19 June 2025 10:43:58 +0000 (0:00:00.128) 0:05:37.807 ********* 2025-06-19 
10:47:07.652869 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:47:07.652876 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:47:07.652882 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:47:07.652889 | orchestrator | 2025-06-19 10:47:07.652895 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-06-19 10:47:07.652902 | orchestrator | Thursday 19 June 2025 10:44:11 +0000 (0:00:13.679) 0:05:51.486 ********* 2025-06-19 10:47:07.652908 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:47:07.652915 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:47:07.652922 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:47:07.652928 | orchestrator | 2025-06-19 10:47:07.652935 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-06-19 10:47:07.652941 | orchestrator | Thursday 19 June 2025 10:44:24 +0000 (0:00:12.417) 0:06:03.904 ********* 2025-06-19 10:47:07.652948 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:47:07.652954 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:47:07.652961 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:47:07.652967 | orchestrator | 2025-06-19 10:47:07.652974 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-06-19 10:47:07.652980 | orchestrator | Thursday 19 June 2025 10:44:45 +0000 (0:00:20.935) 0:06:24.840 ********* 2025-06-19 10:47:07.652987 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:47:07.652993 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:47:07.653000 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:47:07.653010 | orchestrator | 2025-06-19 10:47:07.653017 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-06-19 10:47:07.653023 | orchestrator | Thursday 19 June 2025 10:45:21 +0000 (0:00:36.624) 0:07:01.464 ********* 2025-06-19 10:47:07.653030 
| orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 2025-06-19 10:47:07.653036 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:47:07.653043 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 2025-06-19 10:47:07.653050 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:47:07.653056 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:47:07.653062 | orchestrator | 2025-06-19 10:47:07.653072 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-06-19 10:47:07.653079 | orchestrator | Thursday 19 June 2025 10:45:28 +0000 (0:00:06.300) 0:07:07.764 ********* 2025-06-19 10:47:07.653085 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:47:07.653092 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:47:07.653098 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:47:07.653105 | orchestrator | 2025-06-19 10:47:07.653117 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-06-19 10:47:07.653124 | orchestrator | Thursday 19 June 2025 10:45:28 +0000 (0:00:00.815) 0:07:08.580 ********* 2025-06-19 10:47:07.653130 | orchestrator | changed: [testbed-node-4] 2025-06-19 10:47:07.653137 | orchestrator | changed: [testbed-node-5] 2025-06-19 10:47:07.653143 | orchestrator | changed: [testbed-node-3] 2025-06-19 10:47:07.653150 | orchestrator | 2025-06-19 10:47:07.653156 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-06-19 10:47:07.653183 | orchestrator | Thursday 19 June 2025 10:45:55 +0000 (0:00:26.678) 0:07:35.258 ********* 2025-06-19 10:47:07.653190 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:47:07.653197 | orchestrator | 2025-06-19 10:47:07.653203 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-06-19 
10:47:07.653210 | orchestrator | Thursday 19 June 2025 10:45:55 +0000 (0:00:00.115) 0:07:35.373 ********* 2025-06-19 10:47:07.653216 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:47:07.653223 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:47:07.653229 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:47:07.653236 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:47:07.653242 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:47:07.653249 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2025-06-19 10:47:07.653255 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-06-19 10:47:07.653262 | orchestrator | 2025-06-19 10:47:07.653268 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-06-19 10:47:07.653275 | orchestrator | Thursday 19 June 2025 10:46:17 +0000 (0:00:21.707) 0:07:57.081 ********* 2025-06-19 10:47:07.653281 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:47:07.653288 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:47:07.653295 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:47:07.653301 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:47:07.653307 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:47:07.653314 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:47:07.653320 | orchestrator | 2025-06-19 10:47:07.653327 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-06-19 10:47:07.653334 | orchestrator | Thursday 19 June 2025 10:46:26 +0000 (0:00:09.466) 0:08:06.548 ********* 2025-06-19 10:47:07.653340 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:47:07.653346 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:47:07.653353 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:47:07.653359 | orchestrator | skipping: 
[testbed-node-1] 2025-06-19 10:47:07.653366 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:47:07.653372 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5 2025-06-19 10:47:07.653386 | orchestrator | 2025-06-19 10:47:07.653392 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-06-19 10:47:07.653399 | orchestrator | Thursday 19 June 2025 10:46:30 +0000 (0:00:03.857) 0:08:10.406 ********* 2025-06-19 10:47:07.653405 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-06-19 10:47:07.653412 | orchestrator | 2025-06-19 10:47:07.653419 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-06-19 10:47:07.653425 | orchestrator | Thursday 19 June 2025 10:46:43 +0000 (0:00:12.896) 0:08:23.303 ********* 2025-06-19 10:47:07.653431 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-06-19 10:47:07.653438 | orchestrator | 2025-06-19 10:47:07.653444 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-06-19 10:47:07.653451 | orchestrator | Thursday 19 June 2025 10:46:45 +0000 (0:00:01.648) 0:08:24.951 ********* 2025-06-19 10:47:07.653458 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:47:07.653464 | orchestrator | 2025-06-19 10:47:07.653470 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-06-19 10:47:07.653477 | orchestrator | Thursday 19 June 2025 10:46:46 +0000 (0:00:01.610) 0:08:26.562 ********* 2025-06-19 10:47:07.653484 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-06-19 10:47:07.653490 | orchestrator | 2025-06-19 10:47:07.653496 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-06-19 10:47:07.653503 | orchestrator | Thursday 19 June 2025 10:46:58 +0000 (0:00:11.394) 0:08:37.957 
********* 2025-06-19 10:47:07.653510 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:47:07.653516 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:47:07.653523 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:47:07.653529 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:47:07.653536 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:47:07.653542 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:47:07.653549 | orchestrator | 2025-06-19 10:47:07.653555 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-06-19 10:47:07.653562 | orchestrator | 2025-06-19 10:47:07.653569 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-06-19 10:47:07.653575 | orchestrator | Thursday 19 June 2025 10:47:00 +0000 (0:00:01.878) 0:08:39.835 ********* 2025-06-19 10:47:07.653582 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:47:07.653588 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:47:07.653595 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:47:07.653601 | orchestrator | 2025-06-19 10:47:07.653608 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-06-19 10:47:07.653614 | orchestrator | 2025-06-19 10:47:07.653621 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-06-19 10:47:07.653627 | orchestrator | Thursday 19 June 2025 10:47:01 +0000 (0:00:01.019) 0:08:40.854 ********* 2025-06-19 10:47:07.653634 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:47:07.653640 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:47:07.653647 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:47:07.653653 | orchestrator | 2025-06-19 10:47:07.653663 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-06-19 10:47:07.653670 | orchestrator | 2025-06-19 10:47:07.653677 | orchestrator | TASK 
[nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-06-19 10:47:07.653683 | orchestrator | Thursday 19 June 2025 10:47:01 +0000 (0:00:00.689) 0:08:41.544 ********* 2025-06-19 10:47:07.653694 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-06-19 10:47:07.653700 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-06-19 10:47:07.653707 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-06-19 10:47:07.653714 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-06-19 10:47:07.653720 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-06-19 10:47:07.653727 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-06-19 10:47:07.653739 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:47:07.653745 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-06-19 10:47:07.653767 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-06-19 10:47:07.653774 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-06-19 10:47:07.653781 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-06-19 10:47:07.653787 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-06-19 10:47:07.653794 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-06-19 10:47:07.653800 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:47:07.653807 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-06-19 10:47:07.653813 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-06-19 10:47:07.653819 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-06-19 10:47:07.653826 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-06-19 10:47:07.653832 | orchestrator | skipping: [testbed-node-5] => 
(item=nova-serialproxy)  2025-06-19 10:47:07.653839 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-06-19 10:47:07.653845 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:47:07.653852 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-06-19 10:47:07.653858 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-06-19 10:47:07.653865 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-06-19 10:47:07.653871 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-06-19 10:47:07.653877 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-06-19 10:47:07.653884 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-06-19 10:47:07.653890 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:47:07.653897 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-06-19 10:47:07.653903 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-06-19 10:47:07.653910 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-06-19 10:47:07.653916 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-06-19 10:47:07.653923 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-06-19 10:47:07.653929 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-06-19 10:47:07.653936 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:47:07.653942 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-06-19 10:47:07.653949 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-06-19 10:47:07.653955 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-06-19 10:47:07.653962 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-06-19 10:47:07.653968 | orchestrator | skipping: [testbed-node-2] => 
(item=nova-serialproxy)
2025-06-19 10:47:07.653975 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2025-06-19 10:47:07.653981 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:47:07.653988 | orchestrator |
2025-06-19 10:47:07.653994 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2025-06-19 10:47:07.654001 | orchestrator |
2025-06-19 10:47:07.654007 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2025-06-19 10:47:07.654014 | orchestrator | Thursday 19 June 2025 10:47:02 +0000 (0:00:01.120) 0:08:42.665 *********
2025-06-19 10:47:07.654042 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2025-06-19 10:47:07.654049 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2025-06-19 10:47:07.654055 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:47:07.654062 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2025-06-19 10:47:07.654068 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2025-06-19 10:47:07.654079 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:47:07.654086 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2025-06-19 10:47:07.654093 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2025-06-19 10:47:07.654099 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:47:07.654106 | orchestrator |
2025-06-19 10:47:07.654112 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2025-06-19 10:47:07.654119 | orchestrator |
2025-06-19 10:47:07.654126 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2025-06-19 10:47:07.654132 | orchestrator | Thursday 19 June 2025 10:47:03 +0000 (0:00:00.739) 0:08:43.405 *********
2025-06-19 10:47:07.654139 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:47:07.654146 | orchestrator |
2025-06-19 10:47:07.654153 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2025-06-19 10:47:07.654172 | orchestrator |
2025-06-19 10:47:07.654181 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2025-06-19 10:47:07.654187 | orchestrator | Thursday 19 June 2025 10:47:04 +0000 (0:00:00.635) 0:08:44.040 *********
2025-06-19 10:47:07.654199 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:47:07.654206 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:47:07.654212 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:47:07.654219 | orchestrator |
2025-06-19 10:47:07.654226 | orchestrator | PLAY RECAP *********************************************************************
2025-06-19 10:47:07.654236 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-19 10:47:07.654243 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2025-06-19 10:47:07.654250 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-06-19 10:47:07.654257 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-06-19 10:47:07.654263 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-06-19 10:47:07.654270 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2025-06-19 10:47:07.654276 | orchestrator | testbed-node-5 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2025-06-19 10:47:07.654283 | orchestrator |
2025-06-19 10:47:07.654290 | orchestrator |
2025-06-19 10:47:07.654296 | orchestrator | TASKS RECAP ********************************************************************
2025-06-19 10:47:07.654303 | orchestrator |
Thursday 19 June 2025 10:47:04 +0000 (0:00:00.607) 0:08:44.648 *********
2025-06-19 10:47:07.654309 | orchestrator | ===============================================================================
2025-06-19 10:47:07.654316 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 36.62s
2025-06-19 10:47:07.654322 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 30.92s
2025-06-19 10:47:07.654329 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 26.68s
2025-06-19 10:47:07.654335 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 21.71s
2025-06-19 10:47:07.654342 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 21.13s
2025-06-19 10:47:07.654348 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 20.94s
2025-06-19 10:47:07.654355 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 17.70s
2025-06-19 10:47:07.654362 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 16.91s
2025-06-19 10:47:07.654373 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 15.40s
2025-06-19 10:47:07.654379 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 13.68s
2025-06-19 10:47:07.654386 | orchestrator | nova-cell : Create cell ------------------------------------------------ 13.01s
2025-06-19 10:47:07.654392 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.90s
2025-06-19 10:47:07.654399 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.50s
2025-06-19 10:47:07.654406 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 12.42s
2025-06-19 10:47:07.654412 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.97s
2025-06-19 10:47:07.654419 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.39s
2025-06-19 10:47:07.654425 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 9.47s
2025-06-19 10:47:07.654432 | orchestrator | nova : Restart nova-api container --------------------------------------- 8.79s
2025-06-19 10:47:07.654439 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 8.02s
2025-06-19 10:47:07.654445 | orchestrator | nova-cell : Copying over nova.conf -------------------------------------- 7.95s
2025-06-19 10:47:07.654452 | orchestrator | 2025-06-19 10:47:07 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:47:07.654458 | orchestrator | 2025-06-19 10:47:07 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:47:10.685311 | orchestrator | 2025-06-19 10:47:10 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:47:10.685420 | orchestrator | 2025-06-19 10:47:10 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:47:13.728902 | orchestrator | 2025-06-19 10:47:13 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:47:13.729002 | orchestrator | 2025-06-19 10:47:13 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:47:16.774765 | orchestrator | 2025-06-19 10:47:16 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:47:16.774871 | orchestrator | 2025-06-19 10:47:16 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:47:19.820771 | orchestrator | 2025-06-19 10:47:19 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:47:19.820885 | orchestrator | 2025-06-19 10:47:19 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:47:22.863854 | orchestrator | 2025-06-19 10:47:22 | INFO  | Task
4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:47:22.863984 | orchestrator | 2025-06-19 10:47:22 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:47:25.906004 | orchestrator | 2025-06-19 10:47:25 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:47:25.906225 | orchestrator | 2025-06-19 10:47:25 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:47:28.947272 | orchestrator | 2025-06-19 10:47:28 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:47:28.947385 | orchestrator | 2025-06-19 10:47:28 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:47:31.987864 | orchestrator | 2025-06-19 10:47:31 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:47:31.988881 | orchestrator | 2025-06-19 10:47:31 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:47:35.025910 | orchestrator | 2025-06-19 10:47:35 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:47:35.026082 | orchestrator | 2025-06-19 10:47:35 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:47:38.088129 | orchestrator | 2025-06-19 10:47:38 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:47:38.088270 | orchestrator | 2025-06-19 10:47:38 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:47:41.128469 | orchestrator | 2025-06-19 10:47:41 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:47:41.128976 | orchestrator | 2025-06-19 10:47:41 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:47:44.178248 | orchestrator | 2025-06-19 10:47:44 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:47:44.178337 | orchestrator | 2025-06-19 10:47:44 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:47:47.222513 | orchestrator | 2025-06-19 10:47:47 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:47:47.222625 | orchestrator | 2025-06-19 10:47:47 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:47:50.265063 | orchestrator | 2025-06-19 10:47:50 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:47:50.265205 | orchestrator | 2025-06-19 10:47:50 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:47:53.304307 | orchestrator | 2025-06-19 10:47:53 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:47:53.304417 | orchestrator | 2025-06-19 10:47:53 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:47:56.350814 | orchestrator | 2025-06-19 10:47:56 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:47:56.350928 | orchestrator | 2025-06-19 10:47:56 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:47:59.413406 | orchestrator | 2025-06-19 10:47:59 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:47:59.413516 | orchestrator | 2025-06-19 10:47:59 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:48:02.459675 | orchestrator | 2025-06-19 10:48:02 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:48:02.459755 | orchestrator | 2025-06-19 10:48:02 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:48:05.503925 | orchestrator | 2025-06-19 10:48:05 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:48:05.504033 | orchestrator | 2025-06-19 10:48:05 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:48:08.556797 | orchestrator | 2025-06-19 10:48:08 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:48:08.556901 | orchestrator | 2025-06-19 10:48:08 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:48:11.596444 | orchestrator | 2025-06-19 10:48:11 | INFO  | Task
4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:48:11.596559 | orchestrator | 2025-06-19 10:48:11 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:48:14.627638 | orchestrator | 2025-06-19 10:48:14 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:48:14.627739 | orchestrator | 2025-06-19 10:48:14 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:48:17.662971 | orchestrator | 2025-06-19 10:48:17 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:48:17.663075 | orchestrator | 2025-06-19 10:48:17 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:48:20.707062 | orchestrator | 2025-06-19 10:48:20 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:48:20.707248 | orchestrator | 2025-06-19 10:48:20 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:48:23.746297 | orchestrator | 2025-06-19 10:48:23 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:48:23.746392 | orchestrator | 2025-06-19 10:48:23 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:48:26.788946 | orchestrator | 2025-06-19 10:48:26 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:48:26.789066 | orchestrator | 2025-06-19 10:48:26 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:48:29.839789 | orchestrator | 2025-06-19 10:48:29 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:48:29.839913 | orchestrator | 2025-06-19 10:48:29 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:48:32.880748 | orchestrator | 2025-06-19 10:48:32 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:48:32.880852 | orchestrator | 2025-06-19 10:48:32 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:48:35.928928 | orchestrator | 2025-06-19 10:48:35 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:48:35.929035 | orchestrator | 2025-06-19 10:48:35 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:48:38.972473 | orchestrator | 2025-06-19 10:48:38 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:48:38.972581 | orchestrator | 2025-06-19 10:48:38 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:48:42.021484 | orchestrator | 2025-06-19 10:48:42 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:48:42.021590 | orchestrator | 2025-06-19 10:48:42 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:48:45.061387 | orchestrator | 2025-06-19 10:48:45 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:48:45.063633 | orchestrator | 2025-06-19 10:48:45 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:48:48.110735 | orchestrator | 2025-06-19 10:48:48 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:48:48.110842 | orchestrator | 2025-06-19 10:48:48 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:48:51.158260 | orchestrator | 2025-06-19 10:48:51 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:48:51.158363 | orchestrator | 2025-06-19 10:48:51 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:48:54.221796 | orchestrator | 2025-06-19 10:48:54 | INFO  | Task 50bd16f6-6f0f-4a95-a2e0-e83ce57e7df9 is in state STARTED
2025-06-19 10:48:54.222132 | orchestrator | 2025-06-19 10:48:54 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:48:54.222211 | orchestrator | 2025-06-19 10:48:54 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:48:57.260838 | orchestrator | 2025-06-19 10:48:57 | INFO  | Task 50bd16f6-6f0f-4a95-a2e0-e83ce57e7df9 is in state STARTED
2025-06-19 10:48:57.264651 | orchestrator | 2025-06-19 10:48:57 | INFO
| Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:48:57.264702 | orchestrator | 2025-06-19 10:48:57 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:49:00.302182 | orchestrator | 2025-06-19 10:49:00 | INFO  | Task 50bd16f6-6f0f-4a95-a2e0-e83ce57e7df9 is in state STARTED
2025-06-19 10:49:00.303265 | orchestrator | 2025-06-19 10:49:00 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:49:00.303315 | orchestrator | 2025-06-19 10:49:00 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:49:03.364690 | orchestrator | 2025-06-19 10:49:03 | INFO  | Task 50bd16f6-6f0f-4a95-a2e0-e83ce57e7df9 is in state STARTED
2025-06-19 10:49:03.365939 | orchestrator | 2025-06-19 10:49:03 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:49:03.365970 | orchestrator | 2025-06-19 10:49:03 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:49:06.416450 | orchestrator | 2025-06-19 10:49:06 | INFO  | Task 50bd16f6-6f0f-4a95-a2e0-e83ce57e7df9 is in state STARTED
2025-06-19 10:49:06.418408 | orchestrator | 2025-06-19 10:49:06 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:49:06.418707 | orchestrator | 2025-06-19 10:49:06 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:49:09.461719 | orchestrator | 2025-06-19 10:49:09 | INFO  | Task 50bd16f6-6f0f-4a95-a2e0-e83ce57e7df9 is in state STARTED
2025-06-19 10:49:09.463855 | orchestrator | 2025-06-19 10:49:09 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:49:09.463897 | orchestrator | 2025-06-19 10:49:09 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:49:12.509404 | orchestrator | 2025-06-19 10:49:12 | INFO  | Task 50bd16f6-6f0f-4a95-a2e0-e83ce57e7df9 is in state SUCCESS
2025-06-19 10:49:12.512492 | orchestrator | 2025-06-19 10:49:12 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:49:12.512526 | orchestrator | 2025-06-19 10:49:12 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:49:15.571305 | orchestrator | 2025-06-19 10:49:15 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:49:15.571406 | orchestrator | 2025-06-19 10:49:15 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:49:18.621783 | orchestrator | 2025-06-19 10:49:18 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:49:18.621875 | orchestrator | 2025-06-19 10:49:18 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:49:21.663997 | orchestrator | 2025-06-19 10:49:21 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:49:21.664100 | orchestrator | 2025-06-19 10:49:21 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:49:24.705375 | orchestrator | 2025-06-19 10:49:24 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:49:24.705520 | orchestrator | 2025-06-19 10:49:24 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:49:27.749328 | orchestrator | 2025-06-19 10:49:27 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:49:27.749429 | orchestrator | 2025-06-19 10:49:27 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:49:30.799718 | orchestrator | 2025-06-19 10:49:30 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:49:30.799840 | orchestrator | 2025-06-19 10:49:30 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:49:33.859085 | orchestrator | 2025-06-19 10:49:33 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:49:33.859211 | orchestrator | 2025-06-19 10:49:33 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:49:36.905320 | orchestrator | 2025-06-19 10:49:36 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state STARTED
2025-06-19 10:49:36.905435 | orchestrator | 2025-06-19 10:49:36 | INFO  | Wait 1 second(s) until the next check
2025-06-19 10:49:39.949093 | orchestrator | 2025-06-19 10:49:39 | INFO  | Task 4a2caed9-4eb7-4bfd-9bb8-aeb179a521f1 is in state SUCCESS
2025-06-19 10:49:39.950921 | orchestrator |
2025-06-19 10:49:39.950964 | orchestrator | None
2025-06-19 10:49:39.950977 | orchestrator |
2025-06-19 10:49:39.950988 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-19 10:49:39.951025 | orchestrator |
2025-06-19 10:49:39.951037 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-19 10:49:39.951048 | orchestrator | Thursday 19 June 2025 10:45:03 +0000 (0:00:00.300) 0:00:00.300 *********
2025-06-19 10:49:39.951059 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:49:39.951071 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:49:39.951082 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:49:39.951092 | orchestrator |
2025-06-19 10:49:39.951103 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-19 10:49:39.951113 | orchestrator | Thursday 19 June 2025 10:45:04 +0000 (0:00:00.306) 0:00:00.606 *********
2025-06-19 10:49:39.951124 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2025-06-19 10:49:39.951135 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2025-06-19 10:49:39.951145 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2025-06-19 10:49:39.951156 | orchestrator |
2025-06-19 10:49:39.951906 | orchestrator | PLAY [Apply role octavia] ******************************************************
2025-06-19 10:49:39.951922 | orchestrator |
2025-06-19 10:49:39.951933 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-06-19 10:49:39.951943 | orchestrator | Thursday 19 June 2025 10:45:04 +0000 (0:00:00.426) 0:00:01.032 *********
2025-06-19 10:49:39.951954 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-19 10:49:39.951966 | orchestrator |
2025-06-19 10:49:39.951977 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2025-06-19 10:49:39.951987 | orchestrator | Thursday 19 June 2025 10:45:05 +0000 (0:00:00.607) 0:00:01.639 *********
2025-06-19 10:49:39.951998 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2025-06-19 10:49:39.952008 | orchestrator |
2025-06-19 10:49:39.952019 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2025-06-19 10:49:39.952029 | orchestrator | Thursday 19 June 2025 10:45:08 +0000 (0:00:03.428) 0:00:05.068 *********
2025-06-19 10:49:39.952040 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2025-06-19 10:49:39.952051 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2025-06-19 10:49:39.952062 | orchestrator |
2025-06-19 10:49:39.952085 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2025-06-19 10:49:39.952097 | orchestrator | Thursday 19 June 2025 10:45:15 +0000 (0:00:06.566) 0:00:11.634 *********
2025-06-19 10:49:39.952108 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-06-19 10:49:39.952118 | orchestrator |
2025-06-19 10:49:39.952129 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2025-06-19 10:49:39.952139 | orchestrator | Thursday 19 June 2025 10:45:18 +0000 (0:00:03.235) 0:00:14.869 *********
2025-06-19 10:49:39.952199 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-06-19 10:49:39.952221 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2025-06-19 10:49:39.952241 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2025-06-19 10:49:39.952259 | orchestrator |
2025-06-19 10:49:39.952271 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2025-06-19 10:49:39.952282 | orchestrator | Thursday 19 June 2025 10:45:26 +0000 (0:00:08.235) 0:00:23.104 *********
2025-06-19 10:49:39.952292 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-06-19 10:49:39.952303 | orchestrator |
2025-06-19 10:49:39.952313 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2025-06-19 10:49:39.952324 | orchestrator | Thursday 19 June 2025 10:45:29 +0000 (0:00:03.366) 0:00:26.471 *********
2025-06-19 10:49:39.952335 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2025-06-19 10:49:39.952346 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2025-06-19 10:49:39.952356 | orchestrator |
2025-06-19 10:49:39.952381 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2025-06-19 10:49:39.952392 | orchestrator | Thursday 19 June 2025 10:45:37 +0000 (0:00:07.456) 0:00:33.928 *********
2025-06-19 10:49:39.952402 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2025-06-19 10:49:39.952413 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2025-06-19 10:49:39.952423 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2025-06-19 10:49:39.952434 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2025-06-19 10:49:39.952444 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2025-06-19 10:49:39.952455 | orchestrator |
2025-06-19 10:49:39.952465 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-06-19 10:49:39.952476 | orchestrator | Thursday 19 June 2025 10:45:52 +0000 (0:00:15.472)
0:00:49.400 *********
2025-06-19 10:49:39.952487 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-19 10:49:39.952499 | orchestrator |
2025-06-19 10:49:39.952511 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2025-06-19 10:49:39.952523 | orchestrator | Thursday 19 June 2025 10:45:53 +0000 (0:00:00.602) 0:00:50.002 *********
2025-06-19 10:49:39.952535 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:49:39.952548 | orchestrator |
2025-06-19 10:49:39.952560 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2025-06-19 10:49:39.952572 | orchestrator | Thursday 19 June 2025 10:45:58 +0000 (0:00:04.942) 0:00:54.945 *********
2025-06-19 10:49:39.952584 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:49:39.952596 | orchestrator |
2025-06-19 10:49:39.952608 | orchestrator | TASK [octavia : Get service project id] ****************************************
2025-06-19 10:49:39.952670 | orchestrator | Thursday 19 June 2025 10:46:03 +0000 (0:00:05.178) 0:01:00.123 *********
2025-06-19 10:49:39.952684 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:49:39.952696 | orchestrator |
2025-06-19 10:49:39.952708 | orchestrator | TASK [octavia : Create security groups for octavia] ****************************
2025-06-19 10:49:39.952720 | orchestrator | Thursday 19 June 2025 10:46:06 +0000 (0:00:03.226) 0:01:03.349 *********
2025-06-19 10:49:39.952731 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2025-06-19 10:49:39.952743 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2025-06-19 10:49:39.952756 | orchestrator |
2025-06-19 10:49:39.952769 | orchestrator | TASK [octavia : Add rules for security groups] *********************************
2025-06-19 10:49:39.952781 | orchestrator | Thursday 19 June 2025 10:46:18 +0000 (0:00:11.538) 0:01:14.888 *********
2025-06-19 10:49:39.952793 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
2025-06-19 10:49:39.952805 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
2025-06-19 10:49:39.952818 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
2025-06-19 10:49:39.952829 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])
2025-06-19 10:49:39.952840 | orchestrator |
2025-06-19 10:49:39.952850 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************
2025-06-19 10:49:39.952861 | orchestrator | Thursday 19 June 2025 10:46:34 +0000 (0:00:16.087) 0:01:30.976 *********
2025-06-19 10:49:39.952872 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:49:39.952882 | orchestrator |
2025-06-19 10:49:39.952893 | orchestrator | TASK [octavia : Create loadbalancer management subnet] *************************
2025-06-19 10:49:39.952903 | orchestrator | Thursday 19 June 2025 10:46:39 +0000 (0:00:04.625) 0:01:35.602 *********
2025-06-19 10:49:39.952914 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:49:39.952924 | orchestrator |
2025-06-19 10:49:39.952942 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] ****************
2025-06-19 10:49:39.952953 | orchestrator | Thursday 19 June 2025 10:46:44 +0000 (0:00:05.285) 0:01:40.888 *********
2025-06-19 10:49:39.952963 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:49:39.952974 | orchestrator |
2025-06-19 10:49:39.952990 | orchestrator | TASK [octavia : Update loadbalancer management subnet] *************************
2025-06-19 10:49:39.953001 |
orchestrator | Thursday 19 June 2025 10:46:44 +0000 (0:00:00.209) 0:01:41.097 *********
2025-06-19 10:49:39.953012 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:49:39.953022 | orchestrator |
2025-06-19 10:49:39.953033 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-06-19 10:49:39.953043 | orchestrator | Thursday 19 June 2025 10:46:48 +0000 (0:00:04.216) 0:01:45.313 *********
2025-06-19 10:49:39.953054 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-19 10:49:39.953065 | orchestrator |
2025-06-19 10:49:39.953075 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] *****************
2025-06-19 10:49:39.953086 | orchestrator | Thursday 19 June 2025 10:46:49 +0000 (0:00:00.956) 0:01:46.269 *********
2025-06-19 10:49:39.953097 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:49:39.953107 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:49:39.953118 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:49:39.953128 | orchestrator |
2025-06-19 10:49:39.953138 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ********************
2025-06-19 10:49:39.953149 | orchestrator | Thursday 19 June 2025 10:46:54 +0000 (0:00:05.130) 0:01:51.399 *********
2025-06-19 10:49:39.953183 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:49:39.953196 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:49:39.953207 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:49:39.953217 | orchestrator |
2025-06-19 10:49:39.953228 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************
2025-06-19 10:49:39.953239 | orchestrator | Thursday 19 June 2025 10:46:59 +0000 (0:00:04.289) 0:01:55.689 *********
2025-06-19 10:49:39.953249 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:49:39.953260 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:49:39.953270 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:49:39.953281 | orchestrator |
2025-06-19 10:49:39.953291 | orchestrator | TASK [octavia : Install isc-dhcp-client package] *******************************
2025-06-19 10:49:39.953302 | orchestrator | Thursday 19 June 2025 10:46:59 +0000 (0:00:00.836) 0:01:56.525 *********
2025-06-19 10:49:39.953312 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:49:39.953323 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:49:39.953334 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:49:39.953344 | orchestrator |
2025-06-19 10:49:39.953355 | orchestrator | TASK [octavia : Create octavia dhclient conf] **********************************
2025-06-19 10:49:39.953366 | orchestrator | Thursday 19 June 2025 10:47:01 +0000 (0:00:01.997) 0:01:58.523 *********
2025-06-19 10:49:39.953376 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:49:39.953387 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:49:39.953397 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:49:39.953408 | orchestrator |
2025-06-19 10:49:39.953419 | orchestrator | TASK [octavia : Create octavia-interface service] ******************************
2025-06-19 10:49:39.953429 | orchestrator | Thursday 19 June 2025 10:47:03 +0000 (0:00:01.253) 0:01:59.776 *********
2025-06-19 10:49:39.953440 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:49:39.953450 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:49:39.953461 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:49:39.953471 | orchestrator |
2025-06-19 10:49:39.953482 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] *****************
2025-06-19 10:49:39.953492 | orchestrator | Thursday 19 June 2025 10:47:04 +0000 (0:00:01.180) 0:02:00.956 *********
2025-06-19 10:49:39.953503 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:49:39.953514 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:49:39.953524 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:49:39.953542 | orchestrator |
2025-06-19 10:49:39.953588 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ********************
2025-06-19 10:49:39.953601 | orchestrator | Thursday 19 June 2025 10:47:06 +0000 (0:00:02.048) 0:02:03.005 *********
2025-06-19 10:49:39.953611 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:49:39.953622 | orchestrator | changed: [testbed-node-1]
2025-06-19 10:49:39.953632 | orchestrator | changed: [testbed-node-2]
2025-06-19 10:49:39.953643 | orchestrator |
2025-06-19 10:49:39.953653 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] *****************************
2025-06-19 10:49:39.953664 | orchestrator | Thursday 19 June 2025 10:47:08 +0000 (0:00:01.746) 0:02:04.752 *********
2025-06-19 10:49:39.953674 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:49:39.953685 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:49:39.953695 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:49:39.953705 | orchestrator |
2025-06-19 10:49:39.953716 | orchestrator | TASK [octavia : Gather facts] **************************************************
2025-06-19 10:49:39.953727 | orchestrator | Thursday 19 June 2025 10:47:08 +0000 (0:00:00.653) 0:02:05.406 *********
2025-06-19 10:49:39.953737 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:49:39.953748 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:49:39.953758 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:49:39.953768 | orchestrator |
2025-06-19 10:49:39.953779 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-06-19 10:49:39.953790 | orchestrator | Thursday 19 June 2025 10:47:11 +0000 (0:00:02.803) 0:02:08.210 *********
2025-06-19 10:49:39.953800 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-19 10:49:39.953811 |
orchestrator | 2025-06-19 10:49:39.953821 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2025-06-19 10:49:39.953832 | orchestrator | Thursday 19 June 2025 10:47:12 +0000 (0:00:00.723) 0:02:08.934 ********* 2025-06-19 10:49:39.953842 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:49:39.953853 | orchestrator | 2025-06-19 10:49:39.953864 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-06-19 10:49:39.953874 | orchestrator | Thursday 19 June 2025 10:47:16 +0000 (0:00:03.780) 0:02:12.714 ********* 2025-06-19 10:49:39.953885 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:49:39.953895 | orchestrator | 2025-06-19 10:49:39.953906 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2025-06-19 10:49:39.953916 | orchestrator | Thursday 19 June 2025 10:47:19 +0000 (0:00:03.008) 0:02:15.723 ********* 2025-06-19 10:49:39.953927 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-06-19 10:49:39.953937 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-06-19 10:49:39.953948 | orchestrator | 2025-06-19 10:49:39.953963 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2025-06-19 10:49:39.953974 | orchestrator | Thursday 19 June 2025 10:47:26 +0000 (0:00:07.774) 0:02:23.497 ********* 2025-06-19 10:49:39.953984 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:49:39.953995 | orchestrator | 2025-06-19 10:49:39.954005 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2025-06-19 10:49:39.954088 | orchestrator | Thursday 19 June 2025 10:47:30 +0000 (0:00:03.452) 0:02:26.950 ********* 2025-06-19 10:49:39.954103 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:49:39.954114 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:49:39.954124 | orchestrator | ok: [testbed-node-2] 2025-06-19 
10:49:39.954135 | orchestrator | 2025-06-19 10:49:39.954145 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2025-06-19 10:49:39.954156 | orchestrator | Thursday 19 June 2025 10:47:30 +0000 (0:00:00.309) 0:02:27.259 ********* 2025-06-19 10:49:39.954231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-19 10:49:39.954294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': 
'9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-19 10:49:39.954307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-19 10:49:39.954318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-19 10:49:39.954335 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-19 10:49:39.954346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-19 10:49:39.954364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-19 10:49:39.954375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-19 10:49:39.954413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-19 10:49:39.954424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-19 10:49:39.954434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-19 10:49:39.954449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-19 10:49:39.954459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-19 10:49:39.954475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-19 10:49:39.954485 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-19 10:49:39.954495 | orchestrator | 2025-06-19 10:49:39.954505 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2025-06-19 10:49:39.954515 | orchestrator | Thursday 19 June 2025 10:47:33 +0000 (0:00:02.388) 0:02:29.648 ********* 2025-06-19 10:49:39.954524 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:49:39.954537 | orchestrator | 2025-06-19 10:49:39.954573 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2025-06-19 10:49:39.954584 | orchestrator | Thursday 19 June 2025 10:47:33 +0000 (0:00:00.125) 0:02:29.774 ********* 2025-06-19 10:49:39.954593 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:49:39.954603 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:49:39.954612 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:49:39.954622 | orchestrator | 2025-06-19 10:49:39.954631 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2025-06-19 10:49:39.954641 | orchestrator | Thursday 19 June 2025 10:47:33 +0000 (0:00:00.504) 0:02:30.278 ********* 2025-06-19 10:49:39.954651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-19 10:49:39.954666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-19 10:49:39.954728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-19 10:49:39.954738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 
'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-19 10:49:39.954748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-19 10:49:39.954758 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:49:39.954798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-19 10:49:39.954810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-19 10:49:39.954819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-19 10:49:39.954845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  
2025-06-19 10:49:39.954856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-19 10:49:39.954865 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:49:39.954875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-19 10:49:39.954913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-19 10:49:39.954925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-19 10:49:39.954935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-19 10:49:39.954955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-19 10:49:39.954965 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:49:39.954975 | orchestrator | 2025-06-19 10:49:39.954984 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-06-19 10:49:39.954994 | orchestrator | Thursday 19 June 2025 10:47:34 +0000 (0:00:00.680) 0:02:30.958 ********* 2025-06-19 10:49:39.955004 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:49:39.955014 | orchestrator | 2025-06-19 10:49:39.955023 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2025-06-19 10:49:39.955033 | orchestrator | Thursday 19 June 2025 10:47:34 +0000 (0:00:00.538) 0:02:31.496 ********* 2025-06-19 10:49:39.955043 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-19 10:49:39.955080 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-19 10:49:39.955092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-19 10:49:39.955113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': 
{'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-19 10:49:39.955124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-19 10:49:39.955134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-19 10:49:39.955144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-19 10:49:39.955154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-19 10:49:39.955226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-19 10:49:39.955238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-19 10:49:39.955259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-19 10:49:39.955268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-19 10:49:39.955276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 
2025-06-19 10:49:39.955285 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-19 10:49:39.955299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-19 10:49:39.955307 | orchestrator | 2025-06-19 10:49:39.955315 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2025-06-19 10:49:39.955323 | orchestrator | Thursday 19 June 2025 10:47:40 +0000 (0:00:05.523) 0:02:37.020 ********* 2025-06-19 10:49:39.955331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-19 10:49:39.955347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-19 10:49:39.955359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-19 10:49:39.955367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-19 10:49:39.955375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-19 10:49:39.955383 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:49:39.955395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 
'no'}}}})  2025-06-19 10:49:39.955403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-19 10:49:39.955417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-19 10:49:39.955429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-19 10:49:39.955437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 
'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-19 10:49:39.955445 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:49:39.955453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-19 10:49:39.955465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-19 10:49:39.955473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-19 10:49:39.955486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-19 10:49:39.955498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-19 
10:49:39.955506 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:49:39.955514 | orchestrator | 2025-06-19 10:49:39.955522 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2025-06-19 10:49:39.955530 | orchestrator | Thursday 19 June 2025 10:47:41 +0000 (0:00:00.690) 0:02:37.710 ********* 2025-06-19 10:49:39.955538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-19 10:49:39.955546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-19 10:49:39.955554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 
'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-19 10:49:39.955574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-19 10:49:39.955583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-19 10:49:39.955591 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:49:39.955605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 
'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-19 10:49:39.955613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-19 10:49:39.955621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 
'timeout': '30'}}})  2025-06-19 10:49:39.955629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-19 10:49:39.955649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-19 10:49:39.955657 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:49:39.955666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 
'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-19 10:49:39.955677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-19 10:49:39.955686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-19 10:49:39.955694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-19 10:49:39.955702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-19 10:49:39.955714 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:49:39.955722 | orchestrator | 2025-06-19 10:49:39.955730 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2025-06-19 10:49:39.955738 | orchestrator | Thursday 19 June 2025 10:47:42 +0000 (0:00:00.889) 0:02:38.600 ********* 2025-06-19 10:49:39.955752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-19 10:49:39.955761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-19 10:49:39.955772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-19 10:49:39.955781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-19 10:49:39.955789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-19 10:49:39.955802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-19 10:49:39.955815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-19 10:49:39.955823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-19 10:49:39.955835 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-19 10:49:39.955843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-19 10:49:39.955851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-19 10:49:39.955859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-19 10:49:39.955877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-19 10:49:39.955886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-19 10:49:39.955893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-19 10:49:39.955901 | orchestrator | 2025-06-19 10:49:39.955909 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2025-06-19 10:49:39.955917 | orchestrator | Thursday 19 June 2025 10:47:47 +0000 (0:00:05.348) 0:02:43.949 ********* 2025-06-19 10:49:39.955925 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-06-19 10:49:39.955933 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-06-19 10:49:39.955945 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-06-19 10:49:39.955953 | orchestrator | 2025-06-19 10:49:39.955961 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2025-06-19 10:49:39.955969 | orchestrator | Thursday 19 June 2025 10:47:49 +0000 (0:00:01.877) 0:02:45.826 ********* 2025-06-19 10:49:39.955977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-19 10:49:39.955990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-19 10:49:39.956004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-19 10:49:39.956012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-19 10:49:39.956020 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-19 10:49:39.956032 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-19 10:49:39.956040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-19 10:49:39.956053 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-19 10:49:39.956061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-19 10:49:39.956073 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-19 10:49:39.956081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-19 10:49:39.956093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-19 10:49:39.956101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-19 10:49:39.956109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-19 10:49:39.956122 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-19 10:49:39.956130 | orchestrator | 2025-06-19 10:49:39.956137 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2025-06-19 10:49:39.956145 | orchestrator | Thursday 19 June 2025 10:48:05 +0000 (0:00:16.347) 0:03:02.174 ********* 2025-06-19 10:49:39.956153 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:49:39.956183 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:49:39.956198 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:49:39.956211 | orchestrator | 2025-06-19 10:49:39.956225 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2025-06-19 10:49:39.956233 | orchestrator | Thursday 19 June 2025 10:48:07 +0000 (0:00:01.663) 0:03:03.837 ********* 2025-06-19 10:49:39.956241 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-06-19 10:49:39.956249 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-06-19 10:49:39.956261 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-06-19 10:49:39.956269 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-06-19 10:49:39.956277 | orchestrator | changed: [testbed-node-1] => 
(item=client_ca.cert.pem) 2025-06-19 10:49:39.956284 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-06-19 10:49:39.956292 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-06-19 10:49:39.956300 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-06-19 10:49:39.956308 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-06-19 10:49:39.956315 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-06-19 10:49:39.956323 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-06-19 10:49:39.956330 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-06-19 10:49:39.956338 | orchestrator | 2025-06-19 10:49:39.956346 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2025-06-19 10:49:39.956353 | orchestrator | Thursday 19 June 2025 10:48:12 +0000 (0:00:05.120) 0:03:08.957 ********* 2025-06-19 10:49:39.956361 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-06-19 10:49:39.956369 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-06-19 10:49:39.956376 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-06-19 10:49:39.956384 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-06-19 10:49:39.956392 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-06-19 10:49:39.956399 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-06-19 10:49:39.956407 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-06-19 10:49:39.956415 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-06-19 10:49:39.956422 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-06-19 10:49:39.956435 | orchestrator | changed: [testbed-node-1] => 
(item=server_ca.key.pem) 2025-06-19 10:49:39.956443 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-06-19 10:49:39.956451 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-06-19 10:49:39.956458 | orchestrator | 2025-06-19 10:49:39.956466 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2025-06-19 10:49:39.956480 | orchestrator | Thursday 19 June 2025 10:48:17 +0000 (0:00:05.092) 0:03:14.050 ********* 2025-06-19 10:49:39.956488 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-06-19 10:49:39.956496 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-06-19 10:49:39.956504 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-06-19 10:49:39.956511 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-06-19 10:49:39.956519 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-06-19 10:49:39.956526 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-06-19 10:49:39.956534 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-06-19 10:49:39.956542 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-06-19 10:49:39.956549 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-06-19 10:49:39.956557 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-06-19 10:49:39.956565 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-06-19 10:49:39.956572 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-06-19 10:49:39.956580 | orchestrator | 2025-06-19 10:49:39.956587 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2025-06-19 10:49:39.956595 | orchestrator | Thursday 19 June 2025 10:48:22 +0000 (0:00:04.909) 0:03:18.959 ********* 
2025-06-19 10:49:39.956603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-19 10:49:39.956616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-19 10:49:39.956625 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-19 10:49:39.956642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-19 10:49:39.956650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-19 10:49:39.956658 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-19 10:49:39.956666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-19 10:49:39.956678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-19 10:49:39.956687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 
'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-19 10:49:39.956717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-19 10:49:39.956729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-19 10:49:39.956738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-19 10:49:39.956746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-19 10:49:39.956754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-19 10:49:39.956766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-19 10:49:39.956775 | orchestrator | 2025-06-19 10:49:39.956783 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-06-19 10:49:39.956796 | orchestrator | Thursday 19 June 2025 10:48:26 +0000 (0:00:03.653) 0:03:22.613 ********* 2025-06-19 10:49:39.956804 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:49:39.956812 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:49:39.956819 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:49:39.956827 | orchestrator | 2025-06-19 10:49:39.956835 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2025-06-19 10:49:39.956843 | orchestrator | Thursday 19 June 2025 10:48:26 +0000 (0:00:00.312) 0:03:22.925 ********* 2025-06-19 10:49:39.956850 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:49:39.956858 | orchestrator | 2025-06-19 10:49:39.956866 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2025-06-19 10:49:39.956874 | orchestrator | Thursday 19 June 2025 10:48:28 +0000 (0:00:02.118) 0:03:25.044 ********* 2025-06-19 10:49:39.956882 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:49:39.956889 | orchestrator | 2025-06-19 10:49:39.956897 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2025-06-19 10:49:39.956905 | orchestrator | Thursday 19 June 2025 10:48:30 +0000 (0:00:02.077) 0:03:27.121 ********* 2025-06-19 10:49:39.956913 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:49:39.956920 | orchestrator | 2025-06-19 10:49:39.956928 | orchestrator | TASK [octavia : Creating Octavia persistence database user and 
setting permissions] *** 2025-06-19 10:49:39.956936 | orchestrator | Thursday 19 June 2025 10:48:33 +0000 (0:00:02.583) 0:03:29.705 ********* 2025-06-19 10:49:39.956944 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:49:39.956952 | orchestrator | 2025-06-19 10:49:39.956959 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2025-06-19 10:49:39.956967 | orchestrator | Thursday 19 June 2025 10:48:35 +0000 (0:00:02.210) 0:03:31.916 ********* 2025-06-19 10:49:39.956975 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:49:39.956983 | orchestrator | 2025-06-19 10:49:39.956991 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-06-19 10:49:39.956998 | orchestrator | Thursday 19 June 2025 10:48:55 +0000 (0:00:20.109) 0:03:52.026 ********* 2025-06-19 10:49:39.957006 | orchestrator | 2025-06-19 10:49:39.957014 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-06-19 10:49:39.957026 | orchestrator | Thursday 19 June 2025 10:48:55 +0000 (0:00:00.076) 0:03:52.102 ********* 2025-06-19 10:49:39.957034 | orchestrator | 2025-06-19 10:49:39.957041 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-06-19 10:49:39.957049 | orchestrator | Thursday 19 June 2025 10:48:55 +0000 (0:00:00.070) 0:03:52.172 ********* 2025-06-19 10:49:39.957057 | orchestrator | 2025-06-19 10:49:39.957065 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2025-06-19 10:49:39.957072 | orchestrator | Thursday 19 June 2025 10:48:55 +0000 (0:00:00.071) 0:03:52.244 ********* 2025-06-19 10:49:39.957080 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:49:39.957088 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:49:39.957096 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:49:39.957104 | orchestrator | 2025-06-19 10:49:39.957111 | 
orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2025-06-19 10:49:39.957119 | orchestrator | Thursday 19 June 2025 10:49:06 +0000 (0:00:10.875) 0:04:03.120 ********* 2025-06-19 10:49:39.957127 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:49:39.957135 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:49:39.957143 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:49:39.957150 | orchestrator | 2025-06-19 10:49:39.957158 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2025-06-19 10:49:39.957187 | orchestrator | Thursday 19 June 2025 10:49:12 +0000 (0:00:06.416) 0:04:09.537 ********* 2025-06-19 10:49:39.957195 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:49:39.957203 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:49:39.957210 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:49:39.957218 | orchestrator | 2025-06-19 10:49:39.957226 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2025-06-19 10:49:39.957239 | orchestrator | Thursday 19 June 2025 10:49:21 +0000 (0:00:08.980) 0:04:18.517 ********* 2025-06-19 10:49:39.957247 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:49:39.957255 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:49:39.957263 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:49:39.957271 | orchestrator | 2025-06-19 10:49:39.957278 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2025-06-19 10:49:39.957286 | orchestrator | Thursday 19 June 2025 10:49:32 +0000 (0:00:10.207) 0:04:28.724 ********* 2025-06-19 10:49:39.957294 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:49:39.957301 | orchestrator | changed: [testbed-node-1] 2025-06-19 10:49:39.957309 | orchestrator | changed: [testbed-node-2] 2025-06-19 10:49:39.957316 | orchestrator | 2025-06-19 10:49:39.957324 | orchestrator | 
PLAY RECAP ********************************************************************* 2025-06-19 10:49:39.957332 | orchestrator | testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-19 10:49:39.957340 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-19 10:49:39.957348 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-19 10:49:39.957356 | orchestrator | 2025-06-19 10:49:39.957363 | orchestrator | 2025-06-19 10:49:39.957371 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-19 10:49:39.957379 | orchestrator | Thursday 19 June 2025 10:49:37 +0000 (0:00:05.620) 0:04:34.345 ********* 2025-06-19 10:49:39.957391 | orchestrator | =============================================================================== 2025-06-19 10:49:39.957399 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 20.11s 2025-06-19 10:49:39.957406 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 16.35s 2025-06-19 10:49:39.957414 | orchestrator | octavia : Add rules for security groups -------------------------------- 16.09s 2025-06-19 10:49:39.957421 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.47s 2025-06-19 10:49:39.957429 | orchestrator | octavia : Create security groups for octavia --------------------------- 11.54s 2025-06-19 10:49:39.957437 | orchestrator | octavia : Restart octavia-api container -------------------------------- 10.88s 2025-06-19 10:49:39.957444 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 10.21s 2025-06-19 10:49:39.957452 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 8.98s 2025-06-19 10:49:39.957460 | orchestrator | service-ks-register : 
octavia | Creating users -------------------------- 8.24s 2025-06-19 10:49:39.957467 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.77s 2025-06-19 10:49:39.957475 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.46s 2025-06-19 10:49:39.957483 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.57s 2025-06-19 10:49:39.957490 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 6.42s 2025-06-19 10:49:39.957498 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 5.62s 2025-06-19 10:49:39.957506 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.52s 2025-06-19 10:49:39.957513 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.35s 2025-06-19 10:49:39.957521 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.29s 2025-06-19 10:49:39.957529 | orchestrator | octavia : Create nova keypair for amphora ------------------------------- 5.18s 2025-06-19 10:49:39.957536 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.13s 2025-06-19 10:49:39.957544 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.12s 2025-06-19 10:49:39.957556 | orchestrator | 2025-06-19 10:49:39 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-19 10:49:42.990852 | orchestrator | 2025-06-19 10:49:42 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-19 10:49:46.033307 | orchestrator | 2025-06-19 10:49:46 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-19 10:49:49.079390 | orchestrator | 2025-06-19 10:49:49 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-19 10:49:52.115089 | orchestrator | 2025-06-19 10:49:52 | INFO  | Wait 1 
second(s) until refresh of running tasks 2025-06-19 10:49:55.153857 | orchestrator | 2025-06-19 10:49:55 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-19 10:49:58.190817 | orchestrator | 2025-06-19 10:49:58 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-19 10:50:01.238092 | orchestrator | 2025-06-19 10:50:01 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-19 10:50:04.279735 | orchestrator | 2025-06-19 10:50:04 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-19 10:50:07.319397 | orchestrator | 2025-06-19 10:50:07 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-19 10:50:10.362546 | orchestrator | 2025-06-19 10:50:10 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-19 10:50:13.403430 | orchestrator | 2025-06-19 10:50:13 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-19 10:50:16.451735 | orchestrator | 2025-06-19 10:50:16 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-19 10:50:19.496422 | orchestrator | 2025-06-19 10:50:19 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-19 10:50:22.539297 | orchestrator | 2025-06-19 10:50:22 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-19 10:50:25.579909 | orchestrator | 2025-06-19 10:50:25 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-19 10:50:28.617913 | orchestrator | 2025-06-19 10:50:28 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-19 10:50:31.661376 | orchestrator | 2025-06-19 10:50:31 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-19 10:50:34.703345 | orchestrator | 2025-06-19 10:50:34 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-19 10:50:37.743991 | orchestrator | 2025-06-19 10:50:37 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-19 10:50:40.783369 | orchestrator | 2025-06-19 10:50:41.029912 | orchestrator | 2025-06-19 10:50:41.034013 | 
orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Thu Jun 19 10:50:41 UTC 2025 2025-06-19 10:50:41.034083 | orchestrator | 2025-06-19 10:50:41.505081 | orchestrator | ok: Runtime: 0:33:46.790992 2025-06-19 10:50:41.769812 | 2025-06-19 10:50:41.769966 | TASK [Bootstrap services] 2025-06-19 10:50:42.513717 | orchestrator | 2025-06-19 10:50:42.513895 | orchestrator | # BOOTSTRAP 2025-06-19 10:50:42.513921 | orchestrator | 2025-06-19 10:50:42.513936 | orchestrator | + set -e 2025-06-19 10:50:42.513949 | orchestrator | + echo 2025-06-19 10:50:42.513963 | orchestrator | + echo '# BOOTSTRAP' 2025-06-19 10:50:42.513982 | orchestrator | + echo 2025-06-19 10:50:42.514068 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-06-19 10:50:42.525110 | orchestrator | + set -e 2025-06-19 10:50:42.525161 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-06-19 10:50:46.192129 | orchestrator | 2025-06-19 10:50:46 | INFO  | It takes a moment until task 8a3d5c74-3e2f-43ee-9032-49bd27921a34 (flavor-manager) has been started and output is visible here. 
2025-06-19 10:50:53.797324 | orchestrator | 2025-06-19 10:50:49 | INFO  | Flavor SCS-1V-4 created 2025-06-19 10:50:53.797470 | orchestrator | 2025-06-19 10:50:50 | INFO  | Flavor SCS-2V-8 created 2025-06-19 10:50:53.797490 | orchestrator | 2025-06-19 10:50:50 | INFO  | Flavor SCS-4V-16 created 2025-06-19 10:50:53.797503 | orchestrator | 2025-06-19 10:50:50 | INFO  | Flavor SCS-8V-32 created 2025-06-19 10:50:53.797515 | orchestrator | 2025-06-19 10:50:50 | INFO  | Flavor SCS-1V-2 created 2025-06-19 10:50:53.797526 | orchestrator | 2025-06-19 10:50:50 | INFO  | Flavor SCS-2V-4 created 2025-06-19 10:50:53.797537 | orchestrator | 2025-06-19 10:50:50 | INFO  | Flavor SCS-4V-8 created 2025-06-19 10:50:53.797549 | orchestrator | 2025-06-19 10:50:50 | INFO  | Flavor SCS-8V-16 created 2025-06-19 10:50:53.797574 | orchestrator | 2025-06-19 10:50:51 | INFO  | Flavor SCS-16V-32 created 2025-06-19 10:50:53.797586 | orchestrator | 2025-06-19 10:50:51 | INFO  | Flavor SCS-1V-8 created 2025-06-19 10:50:53.797597 | orchestrator | 2025-06-19 10:50:51 | INFO  | Flavor SCS-2V-16 created 2025-06-19 10:50:53.797608 | orchestrator | 2025-06-19 10:50:51 | INFO  | Flavor SCS-4V-32 created 2025-06-19 10:50:53.797619 | orchestrator | 2025-06-19 10:50:51 | INFO  | Flavor SCS-1L-1 created 2025-06-19 10:50:53.797630 | orchestrator | 2025-06-19 10:50:51 | INFO  | Flavor SCS-2V-4-20s created 2025-06-19 10:50:53.797640 | orchestrator | 2025-06-19 10:50:51 | INFO  | Flavor SCS-4V-16-100s created 2025-06-19 10:50:53.797651 | orchestrator | 2025-06-19 10:50:51 | INFO  | Flavor SCS-1V-4-10 created 2025-06-19 10:50:53.797662 | orchestrator | 2025-06-19 10:50:52 | INFO  | Flavor SCS-2V-8-20 created 2025-06-19 10:50:53.797673 | orchestrator | 2025-06-19 10:50:52 | INFO  | Flavor SCS-4V-16-50 created 2025-06-19 10:50:53.797684 | orchestrator | 2025-06-19 10:50:52 | INFO  | Flavor SCS-8V-32-100 created 2025-06-19 10:50:53.797695 | orchestrator | 2025-06-19 10:50:52 | INFO  | Flavor SCS-1V-2-5 created 
2025-06-19 10:50:53.797706 | orchestrator | 2025-06-19 10:50:52 | INFO  | Flavor SCS-2V-4-10 created 2025-06-19 10:50:53.797717 | orchestrator | 2025-06-19 10:50:52 | INFO  | Flavor SCS-4V-8-20 created 2025-06-19 10:50:53.797728 | orchestrator | 2025-06-19 10:50:52 | INFO  | Flavor SCS-8V-16-50 created 2025-06-19 10:50:53.797739 | orchestrator | 2025-06-19 10:50:52 | INFO  | Flavor SCS-16V-32-100 created 2025-06-19 10:50:53.797750 | orchestrator | 2025-06-19 10:50:53 | INFO  | Flavor SCS-1V-8-20 created 2025-06-19 10:50:53.797761 | orchestrator | 2025-06-19 10:50:53 | INFO  | Flavor SCS-2V-16-50 created 2025-06-19 10:50:53.797772 | orchestrator | 2025-06-19 10:50:53 | INFO  | Flavor SCS-4V-32-100 created 2025-06-19 10:50:53.797783 | orchestrator | 2025-06-19 10:50:53 | INFO  | Flavor SCS-1L-1-5 created 2025-06-19 10:50:55.783824 | orchestrator | 2025-06-19 10:50:55 | INFO  | Trying to run play bootstrap-basic in environment openstack 2025-06-19 10:50:55.788650 | orchestrator | Registering Redlock._acquired_script 2025-06-19 10:50:55.788683 | orchestrator | Registering Redlock._extend_script 2025-06-19 10:50:55.788728 | orchestrator | Registering Redlock._release_script 2025-06-19 10:50:55.861996 | orchestrator | 2025-06-19 10:50:55 | INFO  | Task 3f355435-35b0-499e-a479-9570d2d3af1c (bootstrap-basic) was prepared for execution. 2025-06-19 10:50:55.862133 | orchestrator | 2025-06-19 10:50:55 | INFO  | It takes a moment until task 3f355435-35b0-499e-a479-9570d2d3af1c (bootstrap-basic) has been started and output is visible here. 
2025-06-19 10:51:52.385935 | orchestrator | 2025-06-19 10:51:52.386104 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2025-06-19 10:51:52.386125 | orchestrator | 2025-06-19 10:51:52.386138 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-19 10:51:52.386168 | orchestrator | Thursday 19 June 2025 10:50:59 +0000 (0:00:00.069) 0:00:00.069 ********* 2025-06-19 10:51:52.386191 | orchestrator | ok: [localhost] 2025-06-19 10:51:52.386231 | orchestrator | 2025-06-19 10:51:52.386243 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2025-06-19 10:51:52.386254 | orchestrator | Thursday 19 June 2025 10:51:01 +0000 (0:00:01.736) 0:00:01.806 ********* 2025-06-19 10:51:52.386265 | orchestrator | ok: [localhost] 2025-06-19 10:51:52.386276 | orchestrator | 2025-06-19 10:51:52.386287 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2025-06-19 10:51:52.386298 | orchestrator | Thursday 19 June 2025 10:51:08 +0000 (0:00:07.528) 0:00:09.335 ********* 2025-06-19 10:51:52.386309 | orchestrator | changed: [localhost] 2025-06-19 10:51:52.386320 | orchestrator | 2025-06-19 10:51:52.386332 | orchestrator | TASK [Get volume type local] *************************************************** 2025-06-19 10:51:52.386344 | orchestrator | Thursday 19 June 2025 10:51:15 +0000 (0:00:06.928) 0:00:16.263 ********* 2025-06-19 10:51:52.386355 | orchestrator | ok: [localhost] 2025-06-19 10:51:52.386365 | orchestrator | 2025-06-19 10:51:52.386381 | orchestrator | TASK [Create volume type local] ************************************************ 2025-06-19 10:51:52.386392 | orchestrator | Thursday 19 June 2025 10:51:22 +0000 (0:00:06.684) 0:00:22.948 ********* 2025-06-19 10:51:52.386403 | orchestrator | changed: [localhost] 2025-06-19 10:51:52.386413 | orchestrator | 2025-06-19 10:51:52.386425 | orchestrator | 
TASK [Create public network] *************************************************** 2025-06-19 10:51:52.386436 | orchestrator | Thursday 19 June 2025 10:51:29 +0000 (0:00:07.274) 0:00:30.222 ********* 2025-06-19 10:51:52.386447 | orchestrator | changed: [localhost] 2025-06-19 10:51:52.386459 | orchestrator | 2025-06-19 10:51:52.386482 | orchestrator | TASK [Set public network to default] ******************************************* 2025-06-19 10:51:52.386494 | orchestrator | Thursday 19 June 2025 10:51:34 +0000 (0:00:04.879) 0:00:35.102 ********* 2025-06-19 10:51:52.386507 | orchestrator | changed: [localhost] 2025-06-19 10:51:52.386519 | orchestrator | 2025-06-19 10:51:52.386531 | orchestrator | TASK [Create public subnet] **************************************************** 2025-06-19 10:51:52.386543 | orchestrator | Thursday 19 June 2025 10:51:40 +0000 (0:00:06.088) 0:00:41.191 ********* 2025-06-19 10:51:52.386555 | orchestrator | changed: [localhost] 2025-06-19 10:51:52.386567 | orchestrator | 2025-06-19 10:51:52.386579 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2025-06-19 10:51:52.386591 | orchestrator | Thursday 19 June 2025 10:51:44 +0000 (0:00:04.097) 0:00:45.288 ********* 2025-06-19 10:51:52.386603 | orchestrator | changed: [localhost] 2025-06-19 10:51:52.386615 | orchestrator | 2025-06-19 10:51:52.386627 | orchestrator | TASK [Create manager role] ***************************************************** 2025-06-19 10:51:52.386640 | orchestrator | Thursday 19 June 2025 10:51:48 +0000 (0:00:03.766) 0:00:49.055 ********* 2025-06-19 10:51:52.386651 | orchestrator | ok: [localhost] 2025-06-19 10:51:52.386663 | orchestrator | 2025-06-19 10:51:52.386676 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-19 10:51:52.386689 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-19 10:51:52.386702 | orchestrator 
| 2025-06-19 10:51:52.386714 | orchestrator | 2025-06-19 10:51:52.386753 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-19 10:51:52.386765 | orchestrator | Thursday 19 June 2025 10:51:52 +0000 (0:00:03.497) 0:00:52.552 ********* 2025-06-19 10:51:52.386777 | orchestrator | =============================================================================== 2025-06-19 10:51:52.386791 | orchestrator | Get volume type LUKS ---------------------------------------------------- 7.53s 2025-06-19 10:51:52.386803 | orchestrator | Create volume type local ------------------------------------------------ 7.27s 2025-06-19 10:51:52.386815 | orchestrator | Create volume type LUKS ------------------------------------------------- 6.93s 2025-06-19 10:51:52.386827 | orchestrator | Get volume type local --------------------------------------------------- 6.68s 2025-06-19 10:51:52.386839 | orchestrator | Set public network to default ------------------------------------------- 6.09s 2025-06-19 10:51:52.386850 | orchestrator | Create public network --------------------------------------------------- 4.88s 2025-06-19 10:51:52.386861 | orchestrator | Create public subnet ---------------------------------------------------- 4.10s 2025-06-19 10:51:52.386871 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.77s 2025-06-19 10:51:52.386882 | orchestrator | Create manager role ----------------------------------------------------- 3.50s 2025-06-19 10:51:52.386893 | orchestrator | Gathering Facts --------------------------------------------------------- 1.74s 2025-06-19 10:51:54.428357 | orchestrator | 2025-06-19 10:51:54 | INFO  | It takes a moment until task 9204bede-56f6-44b0-a438-48a12d31d1e3 (image-manager) has been started and output is visible here. 
2025-06-19 10:52:35.288426 | orchestrator | 2025-06-19 10:51:57 | INFO  | Processing image 'Cirros 0.6.2' 2025-06-19 10:52:35.288551 | orchestrator | 2025-06-19 10:51:58 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2025-06-19 10:52:35.288577 | orchestrator | 2025-06-19 10:51:58 | INFO  | Importing image Cirros 0.6.2 2025-06-19 10:52:35.288596 | orchestrator | 2025-06-19 10:51:58 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-06-19 10:52:35.288616 | orchestrator | 2025-06-19 10:51:59 | INFO  | Waiting for image to leave queued state... 2025-06-19 10:52:35.288635 | orchestrator | 2025-06-19 10:52:01 | INFO  | Waiting for import to complete... 2025-06-19 10:52:35.288654 | orchestrator | 2025-06-19 10:52:11 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2025-06-19 10:52:35.288677 | orchestrator | 2025-06-19 10:52:12 | INFO  | Checking parameters of 'Cirros 0.6.2' 2025-06-19 10:52:35.288695 | orchestrator | 2025-06-19 10:52:12 | INFO  | Setting internal_version = 0.6.2 2025-06-19 10:52:35.288715 | orchestrator | 2025-06-19 10:52:12 | INFO  | Setting image_original_user = cirros 2025-06-19 10:52:35.288735 | orchestrator | 2025-06-19 10:52:12 | INFO  | Adding tag os:cirros 2025-06-19 10:52:35.288754 | orchestrator | 2025-06-19 10:52:12 | INFO  | Setting property architecture: x86_64 2025-06-19 10:52:35.288782 | orchestrator | 2025-06-19 10:52:12 | INFO  | Setting property hw_disk_bus: scsi 2025-06-19 10:52:35.288803 | orchestrator | 2025-06-19 10:52:12 | INFO  | Setting property hw_rng_model: virtio 2025-06-19 10:52:35.288821 | orchestrator | 2025-06-19 10:52:13 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-06-19 10:52:35.288840 | orchestrator | 2025-06-19 10:52:13 | INFO  | Setting property hw_watchdog_action: reset 2025-06-19 10:52:35.288855 | orchestrator | 2025-06-19 10:52:13 | 
INFO  | Setting property hypervisor_type: qemu 2025-06-19 10:52:35.288866 | orchestrator | 2025-06-19 10:52:13 | INFO  | Setting property os_distro: cirros 2025-06-19 10:52:35.288891 | orchestrator | 2025-06-19 10:52:13 | INFO  | Setting property replace_frequency: never 2025-06-19 10:52:35.288927 | orchestrator | 2025-06-19 10:52:14 | INFO  | Setting property uuid_validity: none 2025-06-19 10:52:35.288940 | orchestrator | 2025-06-19 10:52:14 | INFO  | Setting property provided_until: none 2025-06-19 10:52:35.288956 | orchestrator | 2025-06-19 10:52:14 | INFO  | Setting property image_description: Cirros 2025-06-19 10:52:35.288968 | orchestrator | 2025-06-19 10:52:14 | INFO  | Setting property image_name: Cirros 2025-06-19 10:52:35.288981 | orchestrator | 2025-06-19 10:52:14 | INFO  | Setting property internal_version: 0.6.2 2025-06-19 10:52:35.288993 | orchestrator | 2025-06-19 10:52:15 | INFO  | Setting property image_original_user: cirros 2025-06-19 10:52:35.289005 | orchestrator | 2025-06-19 10:52:15 | INFO  | Setting property os_version: 0.6.2 2025-06-19 10:52:35.289017 | orchestrator | 2025-06-19 10:52:15 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-06-19 10:52:35.289030 | orchestrator | 2025-06-19 10:52:15 | INFO  | Setting property image_build_date: 2023-05-30 2025-06-19 10:52:35.289042 | orchestrator | 2025-06-19 10:52:15 | INFO  | Checking status of 'Cirros 0.6.2' 2025-06-19 10:52:35.289054 | orchestrator | 2025-06-19 10:52:15 | INFO  | Checking visibility of 'Cirros 0.6.2' 2025-06-19 10:52:35.289066 | orchestrator | 2025-06-19 10:52:15 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2025-06-19 10:52:35.289078 | orchestrator | 2025-06-19 10:52:16 | INFO  | Processing image 'Cirros 0.6.3' 2025-06-19 10:52:35.289090 | orchestrator | 2025-06-19 10:52:16 | INFO  | Tested URL 
https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2025-06-19 10:52:35.289101 | orchestrator | 2025-06-19 10:52:16 | INFO  | Importing image Cirros 0.6.3 2025-06-19 10:52:35.289113 | orchestrator | 2025-06-19 10:52:16 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-06-19 10:52:35.289126 | orchestrator | 2025-06-19 10:52:17 | INFO  | Waiting for image to leave queued state... 2025-06-19 10:52:35.289138 | orchestrator | 2025-06-19 10:52:19 | INFO  | Waiting for import to complete... 2025-06-19 10:52:35.289151 | orchestrator | 2025-06-19 10:52:29 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2025-06-19 10:52:35.289183 | orchestrator | 2025-06-19 10:52:30 | INFO  | Checking parameters of 'Cirros 0.6.3' 2025-06-19 10:52:35.289225 | orchestrator | 2025-06-19 10:52:30 | INFO  | Setting internal_version = 0.6.3 2025-06-19 10:52:35.289239 | orchestrator | 2025-06-19 10:52:30 | INFO  | Setting image_original_user = cirros 2025-06-19 10:52:35.289252 | orchestrator | 2025-06-19 10:52:30 | INFO  | Adding tag os:cirros 2025-06-19 10:52:35.289264 | orchestrator | 2025-06-19 10:52:30 | INFO  | Setting property architecture: x86_64 2025-06-19 10:52:35.289276 | orchestrator | 2025-06-19 10:52:30 | INFO  | Setting property hw_disk_bus: scsi 2025-06-19 10:52:35.289288 | orchestrator | 2025-06-19 10:52:30 | INFO  | Setting property hw_rng_model: virtio 2025-06-19 10:52:35.289299 | orchestrator | 2025-06-19 10:52:31 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-06-19 10:52:35.289310 | orchestrator | 2025-06-19 10:52:31 | INFO  | Setting property hw_watchdog_action: reset 2025-06-19 10:52:35.289320 | orchestrator | 2025-06-19 10:52:31 | INFO  | Setting property hypervisor_type: qemu 2025-06-19 10:52:35.289331 | orchestrator | 2025-06-19 10:52:31 | INFO  | Setting property os_distro: cirros 2025-06-19 10:52:35.289342 | 
orchestrator | 2025-06-19 10:52:32 | INFO  | Setting property replace_frequency: never 2025-06-19 10:52:35.289384 | orchestrator | 2025-06-19 10:52:32 | INFO  | Setting property uuid_validity: none 2025-06-19 10:52:35.289402 | orchestrator | 2025-06-19 10:52:32 | INFO  | Setting property provided_until: none 2025-06-19 10:52:35.289427 | orchestrator | 2025-06-19 10:52:32 | INFO  | Setting property image_description: Cirros 2025-06-19 10:52:35.289450 | orchestrator | 2025-06-19 10:52:32 | INFO  | Setting property image_name: Cirros 2025-06-19 10:52:35.289469 | orchestrator | 2025-06-19 10:52:33 | INFO  | Setting property internal_version: 0.6.3 2025-06-19 10:52:35.289488 | orchestrator | 2025-06-19 10:52:33 | INFO  | Setting property image_original_user: cirros 2025-06-19 10:52:35.289516 | orchestrator | 2025-06-19 10:52:33 | INFO  | Setting property os_version: 0.6.3 2025-06-19 10:52:35.289532 | orchestrator | 2025-06-19 10:52:33 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-06-19 10:52:35.289543 | orchestrator | 2025-06-19 10:52:33 | INFO  | Setting property image_build_date: 2024-09-26 2025-06-19 10:52:35.289554 | orchestrator | 2025-06-19 10:52:34 | INFO  | Checking status of 'Cirros 0.6.3' 2025-06-19 10:52:35.289564 | orchestrator | 2025-06-19 10:52:34 | INFO  | Checking visibility of 'Cirros 0.6.3' 2025-06-19 10:52:35.289575 | orchestrator | 2025-06-19 10:52:34 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2025-06-19 10:52:35.538889 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2025-06-19 10:52:37.750501 | orchestrator | 2025-06-19 10:52:37 | INFO  | date: 2025-06-19 2025-06-19 10:52:37.750591 | orchestrator | 2025-06-19 10:52:37 | INFO  | image: octavia-amphora-haproxy-2024.2.20250619.qcow2 2025-06-19 10:52:37.750697 | orchestrator | 2025-06-19 10:52:37 | INFO  | url: 
https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250619.qcow2 2025-06-19 10:52:37.750731 | orchestrator | 2025-06-19 10:52:37 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250619.qcow2.CHECKSUM 2025-06-19 10:52:37.797318 | orchestrator | 2025-06-19 10:52:37 | INFO  | checksum: b4f7459814f7acd0a327e73541418ded77d9f589b7c07199a86837b9395254c6 2025-06-19 10:52:37.866259 | orchestrator | 2025-06-19 10:52:37 | INFO  | It takes a moment until task fb633d0a-79b9-4b67-a7c7-f94925bfcd91 (image-manager) has been started and output is visible here. 2025-06-19 10:53:38.922119 | orchestrator | /usr/local/lib/python3.13/site-packages/openstack_image_manager/__init__.py:5: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 
2025-06-19 10:53:38.922305 | orchestrator | from pkg_resources import get_distribution, DistributionNotFound 2025-06-19 10:53:38.922326 | orchestrator | 2025-06-19 10:52:40 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-06-19' 2025-06-19 10:53:38.922342 | orchestrator | 2025-06-19 10:52:40 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250619.qcow2: 200 2025-06-19 10:53:38.922356 | orchestrator | 2025-06-19 10:52:40 | INFO  | Importing image OpenStack Octavia Amphora 2025-06-19 2025-06-19 10:53:38.922367 | orchestrator | 2025-06-19 10:52:40 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250619.qcow2 2025-06-19 10:53:38.922403 | orchestrator | 2025-06-19 10:52:41 | INFO  | Waiting for image to leave queued state... 2025-06-19 10:53:38.922416 | orchestrator | 2025-06-19 10:52:43 | INFO  | Waiting for import to complete... 2025-06-19 10:53:38.922427 | orchestrator | 2025-06-19 10:52:53 | INFO  | Waiting for import to complete... 2025-06-19 10:53:38.922438 | orchestrator | 2025-06-19 10:53:03 | INFO  | Waiting for import to complete... 2025-06-19 10:53:38.922448 | orchestrator | 2025-06-19 10:53:13 | INFO  | Waiting for import to complete... 2025-06-19 10:53:38.922459 | orchestrator | 2025-06-19 10:53:24 | INFO  | Waiting for import to complete... 
2025-06-19 10:53:38.922480 | orchestrator | 2025-06-19 10:53:34 | INFO  | Import of 'OpenStack Octavia Amphora 2025-06-19' successfully completed, reloading images 2025-06-19 10:53:38.922493 | orchestrator | 2025-06-19 10:53:34 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-06-19' 2025-06-19 10:53:38.922504 | orchestrator | 2025-06-19 10:53:34 | INFO  | Setting internal_version = 2025-06-19 2025-06-19 10:53:38.922515 | orchestrator | 2025-06-19 10:53:34 | INFO  | Setting image_original_user = ubuntu 2025-06-19 10:53:38.922525 | orchestrator | 2025-06-19 10:53:34 | INFO  | Adding tag amphora 2025-06-19 10:53:38.922536 | orchestrator | 2025-06-19 10:53:34 | INFO  | Adding tag os:ubuntu 2025-06-19 10:53:38.922547 | orchestrator | 2025-06-19 10:53:34 | INFO  | Setting property architecture: x86_64 2025-06-19 10:53:38.922558 | orchestrator | 2025-06-19 10:53:35 | INFO  | Setting property hw_disk_bus: scsi 2025-06-19 10:53:38.922568 | orchestrator | 2025-06-19 10:53:35 | INFO  | Setting property hw_rng_model: virtio 2025-06-19 10:53:38.922579 | orchestrator | 2025-06-19 10:53:35 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-06-19 10:53:38.922591 | orchestrator | 2025-06-19 10:53:35 | INFO  | Setting property hw_watchdog_action: reset 2025-06-19 10:53:38.922603 | orchestrator | 2025-06-19 10:53:36 | INFO  | Setting property hypervisor_type: qemu 2025-06-19 10:53:38.922616 | orchestrator | 2025-06-19 10:53:36 | INFO  | Setting property os_distro: ubuntu 2025-06-19 10:53:38.922627 | orchestrator | 2025-06-19 10:53:36 | INFO  | Setting property replace_frequency: quarterly 2025-06-19 10:53:38.922640 | orchestrator | 2025-06-19 10:53:36 | INFO  | Setting property uuid_validity: last-1 2025-06-19 10:53:38.922652 | orchestrator | 2025-06-19 10:53:36 | INFO  | Setting property provided_until: none 2025-06-19 10:53:38.922665 | orchestrator | 2025-06-19 10:53:37 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2025-06-19 
10:53:38.922677 | orchestrator | 2025-06-19 10:53:37 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2025-06-19 10:53:38.922689 | orchestrator | 2025-06-19 10:53:37 | INFO  | Setting property internal_version: 2025-06-19 2025-06-19 10:53:38.922702 | orchestrator | 2025-06-19 10:53:37 | INFO  | Setting property image_original_user: ubuntu 2025-06-19 10:53:38.922714 | orchestrator | 2025-06-19 10:53:37 | INFO  | Setting property os_version: 2025-06-19 2025-06-19 10:53:38.922728 | orchestrator | 2025-06-19 10:53:38 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250619.qcow2 2025-06-19 10:53:38.922759 | orchestrator | 2025-06-19 10:53:38 | INFO  | Setting property image_build_date: 2025-06-19 2025-06-19 10:53:38.922771 | orchestrator | 2025-06-19 10:53:38 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-06-19' 2025-06-19 10:53:38.922792 | orchestrator | 2025-06-19 10:53:38 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-06-19' 2025-06-19 10:53:38.922824 | orchestrator | 2025-06-19 10:53:38 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2025-06-19 10:53:38.922837 | orchestrator | 2025-06-19 10:53:38 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2025-06-19 10:53:38.922851 | orchestrator | 2025-06-19 10:53:38 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2025-06-19 10:53:38.922863 | orchestrator | 2025-06-19 10:53:38 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2025-06-19 10:53:39.432257 | orchestrator | ok: Runtime: 0:02:57.049544 2025-06-19 10:53:39.459194 | 2025-06-19 10:53:39.459399 | TASK [Run checks] 2025-06-19 10:53:40.155871 | orchestrator | + set -e 2025-06-19 10:53:40.156073 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-19 10:53:40.156096 | 
orchestrator | ++ export INTERACTIVE=false
2025-06-19 10:53:40.156118 | orchestrator | ++ INTERACTIVE=false
2025-06-19 10:53:40.156131 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-06-19 10:53:40.156143 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-06-19 10:53:40.156157 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2025-06-19 10:53:40.156746 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2025-06-19 10:53:40.162114 | orchestrator |
2025-06-19 10:53:40.162243 | orchestrator | # CHECK
2025-06-19 10:53:40.162270 | orchestrator |
2025-06-19 10:53:40.162293 | orchestrator | ++ export MANAGER_VERSION=latest
2025-06-19 10:53:40.162321 | orchestrator | ++ MANAGER_VERSION=latest
2025-06-19 10:53:40.162343 | orchestrator | + echo
2025-06-19 10:53:40.162364 | orchestrator | + echo '# CHECK'
2025-06-19 10:53:40.162384 | orchestrator | + echo
2025-06-19 10:53:40.162411 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-06-19 10:53:40.162883 | orchestrator | ++ semver latest 5.0.0
2025-06-19 10:53:40.220331 | orchestrator |
2025-06-19 10:53:40.220444 | orchestrator | ## Containers @ testbed-manager
2025-06-19 10:53:40.220469 | orchestrator |
2025-06-19 10:53:40.220501 | orchestrator | + [[ -1 -eq -1 ]]
2025-06-19 10:53:40.220515 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-06-19 10:53:40.220530 | orchestrator | + echo
2025-06-19 10:53:40.220546 | orchestrator | + echo '## Containers @ testbed-manager'
2025-06-19 10:53:40.220561 | orchestrator | + echo
2025-06-19 10:53:40.220575 | orchestrator | + osism container testbed-manager ps
2025-06-19 10:53:42.240671 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-06-19 10:53:42.240799 | orchestrator | 52bc2a8e1b70 registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_blackbox_exporter
2025-06-19 10:53:42.240838 | orchestrator | 67622a8b4ba1 registry.osism.tech/kolla/prometheus-alertmanager:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_alertmanager
2025-06-19 10:53:42.240858 | orchestrator | d9fb002a6326 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor
2025-06-19 10:53:42.240870 | orchestrator | d69fd8959e3b registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter
2025-06-19 10:53:42.240881 | orchestrator | 9e93b26b778d registry.osism.tech/kolla/prometheus-v2-server:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_server
2025-06-19 10:53:42.240898 | orchestrator | 0186e656254b registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" 17 minutes ago Up 16 minutes cephclient
2025-06-19 10:53:42.240910 | orchestrator | 80895396543b registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron
2025-06-19 10:53:42.240921 | orchestrator | 7c555363d595 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox
2025-06-19 10:53:42.240932 | orchestrator | cec16557762d registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd
2025-06-19 10:53:42.240970 | orchestrator | 4117dbbad158 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 30 minutes ago Up 29 minutes (healthy) 80/tcp phpmyadmin
2025-06-19 10:53:42.240981 | orchestrator | ef57481d539f registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 30 minutes ago Up 30 minutes openstackclient
2025-06-19 10:53:42.240993 | orchestrator | 710915788a32 registry.osism.tech/osism/homer:v25.05.2 "/bin/sh /entrypoint…" 31 minutes ago Up 30 minutes (healthy) 8080/tcp homer
2025-06-19 10:53:42.241003 | orchestrator | caf2b039f1e1 registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" 38 minutes ago Up 37 minutes (healthy) osism-ansible
2025-06-19 10:53:42.241014 | orchestrator | 896573a49f14 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 51 minutes ago Up 51 minutes (healthy) 192.168.16.5:3128->3128/tcp squid
2025-06-19 10:53:42.241031 | orchestrator | 6f1d48886cae registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" 55 minutes ago Up 37 minutes (healthy) manager-inventory_reconciler-1
2025-06-19 10:53:42.241064 | orchestrator | 558e1da45d13 registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" 55 minutes ago Up 37 minutes (healthy) ceph-ansible
2025-06-19 10:53:42.241076 | orchestrator | 89c67bac2bb5 registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" 55 minutes ago Up 37 minutes (healthy) kolla-ansible
2025-06-19 10:53:42.241087 | orchestrator | c1069ba24fe4 registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" 55 minutes ago Up 37 minutes (healthy) osism-kubernetes
2025-06-19 10:53:42.241098 | orchestrator | 385b7ef73186 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" 55 minutes ago Up 37 minutes (healthy) 8000/tcp manager-ara-server-1
2025-06-19 10:53:42.241108 | orchestrator | 198ec1133ac4 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 55 minutes ago Up 38 minutes (healthy) manager-beat-1
2025-06-19 10:53:42.241119 | orchestrator | 0c8b4d6ce405 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 55 minutes ago Up 38 minutes (healthy) manager-listener-1
2025-06-19 10:53:42.241130 | orchestrator | bd257f55ab61 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 55 minutes ago Up 38 minutes (healthy) manager-openstack-1
2025-06-19 10:53:42.241141 | orchestrator | 7a9b03a7b41a registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 55 minutes ago Up 38 minutes (healthy) manager-flower-1
2025-06-19 10:53:42.241152 | orchestrator | 76b3358fa26b registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" 55 minutes ago Up 38 minutes (healthy) osismclient
2025-06-19 10:53:42.241171 | orchestrator | 475ee7b8d15c registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" 55 minutes ago Up 38 minutes (healthy) 6379/tcp manager-redis-1
2025-06-19 10:53:42.241207 | orchestrator | 36e5f4fef263 registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" 55 minutes ago Up 38 minutes (healthy) 3306/tcp manager-mariadb-1
2025-06-19 10:53:42.241221 | orchestrator | 773747be4471 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 55 minutes ago Up 38 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1
2025-06-19 10:53:42.241231 | orchestrator | 428d73652310 registry.osism.tech/dockerhub/library/traefik:v3.4.1 "/entrypoint.sh trae…" 56 minutes ago Up 56 minutes (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2025-06-19 10:53:42.477450 | orchestrator |
2025-06-19 10:53:42.477570 | orchestrator | ## Images @ testbed-manager
2025-06-19 10:53:42.477593 | orchestrator |
2025-06-19 10:53:42.477611 | orchestrator | + echo
2025-06-19 10:53:42.477629 | orchestrator | + echo '## Images @ testbed-manager'
2025-06-19 10:53:42.477646 | orchestrator | + echo
2025-06-19 10:53:42.477662 | orchestrator | + osism container testbed-manager images
2025-06-19 10:53:44.448173 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-06-19 10:53:44.448318 | orchestrator | registry.osism.tech/osism/osism-ansible latest 3d56761a4571 40 minutes ago 577MB
2025-06-19 10:53:44.448335 | orchestrator | registry.osism.tech/osism/osism latest 5f13386a851e About an hour ago 312MB
2025-06-19 10:53:44.448347 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest 855a792b4638 2 hours ago 1.21GB
2025-06-19 10:53:44.448358 | orchestrator | registry.osism.tech/osism/homer v25.05.2 dcc0765d415b 7 hours ago 11.5MB
2025-06-19 10:53:44.448369 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 cfffd5b6405d 7 hours ago 226MB
2025-06-19 10:53:44.448380 | orchestrator | registry.osism.tech/osism/cephclient reef fe13ec91ecad 7 hours ago 453MB
2025-06-19 10:53:44.448390 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 9bd39ba00847 9 hours ago 628MB
2025-06-19 10:53:44.448401 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 b5eb771b609e 9 hours ago 746MB
2025-06-19 10:53:44.448433 | orchestrator | registry.osism.tech/kolla/cron 2024.2 2ef0eaede3d9 9 hours ago 318MB
2025-06-19 10:53:44.448444 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 b9003cc00d54 9 hours ago 358MB
2025-06-19 10:53:44.448454 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 36b2f03ef43f 9 hours ago 410MB
2025-06-19 10:53:44.448465 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2024.2 84e61375a513 9 hours ago 456MB
2025-06-19 10:53:44.448475 | orchestrator | registry.osism.tech/kolla/prometheus-v2-server 2024.2 cd32a0e0d73f 9 hours ago 891MB
2025-06-19 10:53:44.448486 | orchestrator | registry.osism.tech/kolla/prometheus-blackbox-exporter 2024.2 4b17d86e904b 9 hours ago 360MB
2025-06-19 10:53:44.448496 | orchestrator | registry.osism.tech/osism/kolla-ansible 2024.2 c75db7559991 11 hours ago 574MB
2025-06-19 10:53:44.448507 | orchestrator | registry.osism.tech/osism/osism-ansible 4b9a451d21b5 11 hours ago 577MB
2025-06-19 10:53:44.448517 | orchestrator | registry.osism.tech/osism/ceph-ansible reef c52576f60b24 11 hours ago 537MB
2025-06-19 10:53:44.448549 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest c2922c633028 11 hours ago 310MB
2025-06-19 10:53:44.448561 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.4-alpine 7ff232a1fe04 2 weeks ago 41.4MB
2025-06-19 10:53:44.448572 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.4.1 ff0a241c8a0a 3 weeks ago 224MB
2025-06-19 10:53:44.448582 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.7.2 6b3ebe9793bb 4 months ago 328MB
2025-06-19 10:53:44.448593 | orchestrator | phpmyadmin/phpmyadmin 5.2 0276a66ce322 4 months ago 571MB
2025-06-19 10:53:44.448603 | orchestrator | registry.osism.tech/osism/ara-server 1.7.2 bb44122eb176 9 months ago 300MB
2025-06-19 10:53:44.448614 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 12 months ago 146MB
2025-06-19 10:53:44.715818 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-06-19 10:53:44.716132 | orchestrator | ++ semver latest 5.0.0
2025-06-19 10:53:44.773062 | orchestrator |
2025-06-19 10:53:44.773999 | orchestrator | ## Containers @ testbed-node-0
2025-06-19 10:53:44.774090 | orchestrator |
2025-06-19 10:53:44.774103 | orchestrator | + [[ -1 -eq -1 ]]
2025-06-19 10:53:44.774114 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-06-19 10:53:44.774125 | orchestrator | + echo
2025-06-19 10:53:44.774136 | orchestrator | + echo '## Containers @ testbed-node-0'
2025-06-19 10:53:44.774148 | orchestrator | + echo
2025-06-19 10:53:44.774159 | orchestrator | + osism container testbed-node-0 ps
2025-06-19 10:53:46.899839 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-06-19 10:53:46.899964 | orchestrator | abcc5100221f registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2025-06-19 10:53:46.899985 | orchestrator | 8d4a63394367 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2025-06-19 10:53:46.899998 | orchestrator | f6b9d11e23a9 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2025-06-19 10:53:46.900009 | orchestrator | 49d679c2ae96 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent
2025-06-19 10:53:46.900020 | orchestrator | c1a084e09254 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api
2025-06-19 10:53:46.900031 | orchestrator | da4a0aa521e3 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor
2025-06-19 10:53:46.900042 | orchestrator | 526340b35fd6 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_api
2025-06-19 10:53:46.900053 | orchestrator | 75c27f692529 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana
2025-06-19 10:53:46.900064 | orchestrator | 754c6a6c52e9 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker
2025-06-19 10:53:46.900075 | orchestrator | 42c63667ed72 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api
2025-06-19 10:53:46.900102 | orchestrator | 6e05682eaa05 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns
2025-06-19 10:53:46.900133 | orchestrator | d72231a2de75 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy
2025-06-19 10:53:46.900144 | orchestrator | d7d8528bb53e registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_producer
2025-06-19 10:53:46.900155 | orchestrator | 536cc5ab4384 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_central
2025-06-19 10:53:46.900165 | orchestrator | 098f0c55aa6a registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor
2025-06-19 10:53:46.900176 | orchestrator | 661d7a4693f0 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_api
2025-06-19 10:53:46.900219 | orchestrator | 3db8d54af9b1 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9
2025-06-19 10:53:46.900268 | orchestrator | 0fae4b0f55d9 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server
2025-06-19 10:53:46.900281 | orchestrator | 6b1f693baf8e registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker
2025-06-19 10:53:46.900292 | orchestrator | fc3749ea0561 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener
2025-06-19 10:53:46.900303 | orchestrator | 8456c67dd54d registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api
2025-06-19 10:53:46.900336 | orchestrator | 9f10978a2bc2 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api
2025-06-19 10:53:46.900347 | orchestrator | 6ee6739e9340 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 9 minutes (healthy) nova_scheduler
2025-06-19 10:53:46.900358 | orchestrator | e97cf7765fa2 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api
2025-06-19 10:53:46.900368 | orchestrator | 9c4be8a14f49 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler
2025-06-19 10:53:46.900379 | orchestrator | e2308d65100d registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_api
2025-06-19 10:53:46.900390 | orchestrator | 30fb18e34658 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter
2025-06-19 10:53:46.900406 | orchestrator | 5cd50eb876a3 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor
2025-06-19 10:53:46.900417 | orchestrator | ab1f35b97f4c registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter
2025-06-19 10:53:46.900427 | orchestrator | cddf997a87f0 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter
2025-06-19 10:53:46.900460 | orchestrator | e8f8cecb3ff3 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter
2025-06-19 10:53:46.900477 | orchestrator | fbff3cc7b52d registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-0
2025-06-19 10:53:46.900499 | orchestrator | 9cdb875a6b5d registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone
2025-06-19 10:53:46.900510 | orchestrator | ea401118bdc7 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone_fernet
2025-06-19 10:53:46.900520 | orchestrator | 45638007188f registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone_ssh
2025-06-19 10:53:46.900530 | orchestrator | d2699c302f13 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) horizon
2025-06-19 10:53:46.900541 | orchestrator | 68333c147c1e registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 18 minutes ago Up 18 minutes (healthy) mariadb
2025-06-19 10:53:46.900552 | orchestrator | 89151b137186 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards
2025-06-19 10:53:46.900562 | orchestrator | adcf46493236 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch
2025-06-19 10:53:46.900573 | orchestrator | 003ea78a68ad registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes keepalived
2025-06-19 10:53:46.900583 | orchestrator | b055dee8ff84 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql
2025-06-19 10:53:46.900594 | orchestrator | 35662922159e registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-0
2025-06-19 10:53:46.900604 | orchestrator | 8ea590e549a0 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) haproxy
2025-06-19 10:53:46.900615 | orchestrator | 69b1fa83e92c registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_northd
2025-06-19 10:53:46.900645 | orchestrator | 112329c8a9af registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_sb_db
2025-06-19 10:53:46.900656 | orchestrator | ee6a90fbc582 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_nb_db
2025-06-19 10:53:46.900667 | orchestrator | 4f1901b004f1 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_controller
2025-06-19 10:53:46.900678 | orchestrator | c7181e2bbd59 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 26 minutes ago Up 26 minutes ceph-mon-testbed-node-0
2025-06-19 10:53:46.900688 | orchestrator | f7d3992747bb registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) rabbitmq
2025-06-19 10:53:46.900699 | orchestrator | 482961878df6 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd
2025-06-19 10:53:46.900716 | orchestrator | 4cbe5060acf9 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis_sentinel
2025-06-19 10:53:46.900727 | orchestrator | a3ca38ed13bd registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_db
2025-06-19 10:53:46.900738 | orchestrator | a8e17f45af84 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis
2025-06-19 10:53:46.900748 | orchestrator | 9063d327d9f1 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) memcached
2025-06-19 10:53:46.900759 | orchestrator | 89dc2763c0a4 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron
2025-06-19 10:53:46.900770 | orchestrator | 4184b750c45d registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox
2025-06-19 10:53:46.900780 | orchestrator | c78cd079132b registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes fluentd
2025-06-19 10:53:47.132208 | orchestrator |
2025-06-19 10:53:47.132333 | orchestrator | ## Images @ testbed-node-0
2025-06-19 10:53:47.132359 | orchestrator |
2025-06-19 10:53:47.132380 | orchestrator | + echo
2025-06-19 10:53:47.132400 | orchestrator | + echo '## Images @ testbed-node-0'
2025-06-19 10:53:47.132419 | orchestrator | + echo
2025-06-19 10:53:47.132438 | orchestrator | + osism container testbed-node-0 images
2025-06-19 10:53:49.129765 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-06-19 10:53:49.129877 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 7f16f2edcdbe 8 hours ago 1.27GB
2025-06-19 10:53:49.129892 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 a2be729d3919 9 hours ago 417MB
2025-06-19 10:53:49.129903 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 9bd39ba00847 9 hours ago 628MB
2025-06-19 10:53:49.129914 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 57408b87a880 9 hours ago 1.59GB
2025-06-19 10:53:49.129924 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 ce100d02455a 9 hours ago 1.55GB
2025-06-19 10:53:49.129935 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 a6059dc263fb 9 hours ago 329MB
2025-06-19 10:53:49.129945 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 15167c71a27a 9 hours ago 1.01GB
2025-06-19 10:53:49.129956 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 c79b10660dab 9 hours ago 375MB
2025-06-19 10:53:49.129966 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 69bb822e4c46 9 hours ago 326MB
2025-06-19 10:53:49.129976 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 b5eb771b609e 9 hours ago 746MB
2025-06-19 10:53:49.129988 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 986ab28569d2 9 hours ago 318MB
2025-06-19 10:53:49.129998 | orchestrator | registry.osism.tech/kolla/cron 2024.2 2ef0eaede3d9 9 hours ago 318MB
2025-06-19 10:53:49.130009 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 2ba5be517edf 9 hours ago 590MB
2025-06-19 10:53:49.130071 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 004b3542e240 9 hours ago 353MB
2025-06-19 10:53:49.130082 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 b153d384e2e9 9 hours ago 351MB
2025-06-19 10:53:49.130093 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 b9003cc00d54 9 hours ago 358MB
2025-06-19 10:53:49.130127 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 36b2f03ef43f 9 hours ago 410MB
2025-06-19 10:53:49.130138 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 98e28e256541 9 hours ago 344MB
2025-06-19 10:53:49.130148 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 382fa572aeb0 9 hours ago 1.21GB
2025-06-19 10:53:49.130159 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 dcc2b16f119a 9 hours ago 361MB
2025-06-19 10:53:49.130169 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 c22fdcceadc5 9 hours ago 361MB
2025-06-19 10:53:49.130205 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 c9525efff366 9 hours ago 324MB
2025-06-19 10:53:49.130216 | orchestrator | registry.osism.tech/kolla/redis 2024.2 99d377c1725b 9 hours ago 324MB
2025-06-19 10:53:49.130227 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 4a0c9d8671fc 9 hours ago 1.41GB
2025-06-19 10:53:49.130237 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 a9b50070a7ce 9 hours ago 1.41GB
2025-06-19 10:53:49.130248 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 5ef54d43211d 9 hours ago 1.31GB
2025-06-19 10:53:49.130273 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 506b859610dc 9 hours ago 1.2GB
2025-06-19 10:53:49.130284 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2024.2 b766d71e4c1e 9 hours ago 1.11GB
2025-06-19 10:53:49.130296 | orchestrator | registry.osism.tech/kolla/skyline-console 2024.2 38291c09b3b9 9 hours ago 1.11GB
2025-06-19 10:53:49.130308 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 21e37512da66 9 hours ago 1.1GB
2025-06-19 10:53:49.130320 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 7c4aa4fc28f6 9 hours ago 1.12GB
2025-06-19 10:53:49.130332 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 a5bce9a2abbb 9 hours ago 1.1GB
2025-06-19 10:53:49.130344 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 146b987c62d9 9 hours ago 1.1GB
2025-06-19 10:53:49.130370 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 19c69bf11a7c 9 hours ago 1.12GB
2025-06-19 10:53:49.130383 | orchestrator | registry.osism.tech/kolla/aodh-api 2024.2 67ceff32be42 9 hours ago 1.04GB
2025-06-19 10:53:49.130394 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2024.2 5a3b1099b159 9 hours ago 1.04GB
2025-06-19 10:53:49.130406 | orchestrator | registry.osism.tech/kolla/aodh-listener 2024.2 528cc98a82d0 9 hours ago 1.04GB
2025-06-19 10:53:49.130436 | orchestrator | registry.osism.tech/kolla/aodh-notifier 2024.2 50f7566bbb15 9 hours ago 1.04GB
2025-06-19 10:53:49.130449 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 7f11c4464820 9 hours ago 1.15GB
2025-06-19 10:53:49.130461 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 e9b0ecda36f6 9 hours ago 1.05GB
2025-06-19 10:53:49.130473 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 110d6dcc33ba 9 hours ago 1.05GB
2025-06-19 10:53:49.130485 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 aae74116f1d4 9 hours ago 1.05GB
2025-06-19 10:53:49.130497 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 c48b45c02968 9 hours ago 1.06GB
2025-06-19 10:53:49.130508 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 68bdaf277975 9 hours ago 1.06GB
2025-06-19 10:53:49.130521 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 41f20514d1dc 9 hours ago 1.05GB
2025-06-19 10:53:49.130533 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 7a683b4a54db 9 hours ago 1.42GB
2025-06-19 10:53:49.130552 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 b0ac03972cd3 9 hours ago 1.29GB
2025-06-19 10:53:49.130564 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 27cc063ad1ee 9 hours ago 1.29GB
2025-06-19 10:53:49.130577 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 3961faeea082 9 hours ago 1.29GB
2025-06-19 10:53:49.130589 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 04d917498625 9 hours ago 1.06GB
2025-06-19 10:53:49.130602 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 195e35cf940a 9 hours ago 1.06GB
2025-06-19 10:53:49.130614 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 16b183490e42 9 hours ago 1.06GB
2025-06-19 10:53:49.130627 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 91517ab27a46 9 hours ago 1.11GB
2025-06-19 10:53:49.130639 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 25034b238f44 9 hours ago 1.11GB
2025-06-19 10:53:49.130650 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 f9a01fc47d6f 9 hours ago 1.13GB
2025-06-19 10:53:49.130667 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2024.2 e2a7d7007292 9 hours ago 1.04GB
2025-06-19 10:53:49.130678 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2024.2 2ca6f05b770b 9 hours ago 1.04GB
2025-06-19 10:53:49.130688 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 cc72cdcb3ea1 9 hours ago 1.24GB
2025-06-19 10:53:49.130699 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 c69e2336744f 9 hours ago 1.04GB
2025-06-19 10:53:49.130709 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 41bfb033bd2c 9 hours ago 946MB
2025-06-19 10:53:49.130720 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 a86a91d5e3e6 9 hours ago 947MB
2025-06-19 10:53:49.130730 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 342b8ebe5f79 9 hours ago 946MB
2025-06-19 10:53:49.130741 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 7261603108a3 9 hours ago 947MB
2025-06-19 10:53:49.350449 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-06-19 10:53:49.351397 | orchestrator | ++ semver latest 5.0.0
2025-06-19 10:53:49.419546 | orchestrator |
2025-06-19 10:53:49.419654 | orchestrator | ## Containers @ testbed-node-1
2025-06-19 10:53:49.419669 | orchestrator |
2025-06-19 10:53:49.419681 | orchestrator | + [[ -1 -eq -1 ]]
2025-06-19 10:53:49.419693 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-06-19 10:53:49.419703 | orchestrator | + echo
2025-06-19 10:53:49.419714 | orchestrator | + echo '## Containers @ testbed-node-1'
2025-06-19 10:53:49.419726 | orchestrator | + echo
2025-06-19 10:53:49.419736 | orchestrator | + osism container testbed-node-1 ps
2025-06-19 10:53:51.528586 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-06-19 10:53:51.528706 | orchestrator | 9c55bd69ec09 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2025-06-19 10:53:51.528732 | orchestrator | 248352f6d3cd registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2025-06-19 10:53:51.528752 | orchestrator | 2e9234b5ec77 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2025-06-19 10:53:51.528771 | orchestrator | 0aa7cbae7270 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent
2025-06-19 10:53:51.528790 | orchestrator | 54b7b0a325f0 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api
2025-06-19 10:53:51.528839 | orchestrator | aace3bcd43f7 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana
2025-06-19 10:53:51.528852 | orchestrator | 1006b211ced6 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor
2025-06-19 10:53:51.528863 | orchestrator | 1fa95a619344 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_api
2025-06-19 10:53:51.528874 | orchestrator | 6be59087b555 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker
2025-06-19 10:53:51.528884 | orchestrator | 59e32176ab99 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api
2025-06-19 10:53:51.528894 | orchestrator | df3e858fc5cd registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns
2025-06-19 10:53:51.528905 | orchestrator | 25ff7eb1d3f2 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy
2025-06-19 10:53:51.528926 | orchestrator | 88475e7ace10 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_producer
2025-06-19 10:53:51.528937 | orchestrator | 2a241efac903 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_central
2025-06-19 10:53:51.528952 | orchestrator | 1a72df6bf48e registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor
2025-06-19 10:53:51.528963 | orchestrator | b51c4b512aec registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_api
2025-06-19 10:53:51.528974 | orchestrator | 78b03f0c7501 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9
2025-06-19 10:53:51.528985 | orchestrator | 50047fdb0992 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server
2025-06-19 10:53:51.528995 | orchestrator | 19927dba8553 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker
2025-06-19 10:53:51.529007 | orchestrator | af261c5f94d9 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener
2025-06-19 10:53:51.529017 | orchestrator | 5aa21c804440 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api
2025-06-19 10:53:51.529045 | orchestrator | 567b23963380 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 9 minutes (healthy) nova_scheduler
2025-06-19 10:53:51.529057 | orchestrator | f8c4044c565b registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api
2025-06-19 10:53:51.529068 | orchestrator | f534f8df8866 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api
2025-06-19 10:53:51.529078 | orchestrator | dd04b1f8693d registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler
2025-06-19 10:53:51.529097 | orchestrator | 2dc8cb8842e5 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_api
2025-06-19 10:53:51.529109 | orchestrator | 19d30a428ae2 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter
2025-06-19 10:53:51.529122 | orchestrator | ca91c043ca1c registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor
2025-06-19 10:53:51.529134 | orchestrator | 759076de1612 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter
2025-06-19 10:53:51.529147 | orchestrator | 7b590f40ea3f registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter
2025-06-19 10:53:51.529159 | orchestrator | 9e109503e58b registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter
2025-06-19 10:53:51.529171 | orchestrator | 5c75c3ff5dcf registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-1
2025-06-19 10:53:51.529222 | orchestrator | eb48fcad26e0 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone
2025-06-19 10:53:51.529245 | orchestrator | 342247ccfd67 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone_fernet
2025-06-19 10:53:51.529274 | orchestrator | 2103428232c1 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) horizon
2025-06-19 10:53:51.529287 | orchestrator | 41e321594af7 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 17 minutes ago Up 16 minutes (healthy) keystone_ssh
2025-06-19 10:53:51.529299 | orchestrator | d9473b1daa51 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) opensearch_dashboards
2025-06-19 10:53:51.529311 | orchestrator | bdd8caac7f84 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb
2025-06-19 10:53:51.529323 | orchestrator | 4a347fc69ade registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch
2025-06-19 10:53:51.529334 | orchestrator | e46235a8d1bd registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived
2025-06-19 10:53:51.529344 | orchestrator | aed97145dda4 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql
2025-06-19 10:53:51.529355 | orchestrator | 515d02800458 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-1
2025-06-19 10:53:51.529365 | orchestrator | a9b530f13717 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) haproxy
2025-06-19 10:53:51.529376 | orchestrator | a2352ec2ff6e registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 25 minutes ago Up 24 minutes ovn_northd
2025-06-19 10:53:51.529401 | orchestrator | 89a6472bed58 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 24 minutes ovn_sb_db
2025-06-19 10:53:51.529413 | orchestrator | afaa5158de26 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_nb_db
2025-06-19 10:53:51.529423 | orchestrator | 8f920bc5abe1 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_controller
2025-06-19 10:53:51.529434 | orchestrator | 5dd8917e6561 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) rabbitmq
2025-06-19 10:53:51.529444 | orchestrator | 9033b53796cd registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 26 minutes ago Up 26 minutes ceph-mon-testbed-node-1
2025-06-19 10:53:51.529454 | orchestrator | 7e9673345cc0 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd
2025-06-19 10:53:51.529465 | orchestrator | 54cfbfbc7f39 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis_sentinel
2025-06-19 10:53:51.529475 | orchestrator | 6076e007d9c5 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_db
2025-06-19 10:53:51.529486 | orchestrator | 51b003753280 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis
2025-06-19 10:53:51.529496 | orchestrator | d067dbf3afd0 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 28 minutes ago Up 27 minutes (healthy) memcached
2025-06-19 10:53:51.529507 | orchestrator | a16392c36942 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron
2025-06-19 10:53:51.529517 | orchestrator | a533d92f26ea registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox
2025-06-19 10:53:51.529527 | orchestrator | 582b6ea61cff registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd
2025-06-19 10:53:51.746834 | orchestrator |
2025-06-19 10:53:51.746954 | orchestrator | ## Images @ testbed-node-1
2025-06-19 10:53:51.746979 | orchestrator |
2025-06-19 10:53:51.746997 | orchestrator | + echo
2025-06-19 10:53:51.747018 | orchestrator | + echo '## Images @ testbed-node-1'
2025-06-19 10:53:51.747038 | orchestrator | + echo
2025-06-19 10:53:51.747057 | orchestrator | + osism container testbed-node-1 images
2025-06-19 10:53:53.735158 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-06-19 10:53:53.735320 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 7f16f2edcdbe 8 hours ago 1.27GB
2025-06-19 10:53:53.735335 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 a2be729d3919 9 hours ago 417MB
2025-06-19 10:53:53.735367 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 9bd39ba00847 9 hours ago 628MB
2025-06-19
10:53:53.735378 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 57408b87a880 9 hours ago 1.59GB 2025-06-19 10:53:53.735389 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 ce100d02455a 9 hours ago 1.55GB 2025-06-19 10:53:53.735400 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 a6059dc263fb 9 hours ago 329MB 2025-06-19 10:53:53.735411 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 15167c71a27a 9 hours ago 1.01GB 2025-06-19 10:53:53.735440 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 c79b10660dab 9 hours ago 375MB 2025-06-19 10:53:53.735452 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 69bb822e4c46 9 hours ago 326MB 2025-06-19 10:53:53.735463 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 b5eb771b609e 9 hours ago 746MB 2025-06-19 10:53:53.735473 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 986ab28569d2 9 hours ago 318MB 2025-06-19 10:53:53.735484 | orchestrator | registry.osism.tech/kolla/cron 2024.2 2ef0eaede3d9 9 hours ago 318MB 2025-06-19 10:53:53.735494 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 2ba5be517edf 9 hours ago 590MB 2025-06-19 10:53:53.735504 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 004b3542e240 9 hours ago 353MB 2025-06-19 10:53:53.735515 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 b153d384e2e9 9 hours ago 351MB 2025-06-19 10:53:53.735525 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 b9003cc00d54 9 hours ago 358MB 2025-06-19 10:53:53.735535 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 36b2f03ef43f 9 hours ago 410MB 2025-06-19 10:53:53.735545 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 98e28e256541 9 hours ago 344MB 2025-06-19 10:53:53.735556 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 382fa572aeb0 9 hours ago 1.21GB 2025-06-19 
10:53:53.735566 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 dcc2b16f119a 9 hours ago 361MB 2025-06-19 10:53:53.735576 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 c22fdcceadc5 9 hours ago 361MB 2025-06-19 10:53:53.735587 | orchestrator | registry.osism.tech/kolla/redis 2024.2 99d377c1725b 9 hours ago 324MB 2025-06-19 10:53:53.735597 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 c9525efff366 9 hours ago 324MB 2025-06-19 10:53:53.735607 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 4a0c9d8671fc 9 hours ago 1.41GB 2025-06-19 10:53:53.735618 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 a9b50070a7ce 9 hours ago 1.41GB 2025-06-19 10:53:53.735628 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 5ef54d43211d 9 hours ago 1.31GB 2025-06-19 10:53:53.735638 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 506b859610dc 9 hours ago 1.2GB 2025-06-19 10:53:53.735648 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 21e37512da66 9 hours ago 1.1GB 2025-06-19 10:53:53.735658 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 7c4aa4fc28f6 9 hours ago 1.12GB 2025-06-19 10:53:53.735669 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 a5bce9a2abbb 9 hours ago 1.1GB 2025-06-19 10:53:53.735679 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 146b987c62d9 9 hours ago 1.1GB 2025-06-19 10:53:53.735691 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 19c69bf11a7c 9 hours ago 1.12GB 2025-06-19 10:53:53.735703 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 7f11c4464820 9 hours ago 1.15GB 2025-06-19 10:53:53.735715 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 e9b0ecda36f6 9 hours ago 1.05GB 2025-06-19 10:53:53.735727 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 110d6dcc33ba 9 hours ago 1.05GB 2025-06-19 
10:53:53.735739 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 aae74116f1d4 9 hours ago 1.05GB 2025-06-19 10:53:53.735751 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 c48b45c02968 9 hours ago 1.06GB 2025-06-19 10:53:53.735789 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 68bdaf277975 9 hours ago 1.06GB 2025-06-19 10:53:53.735803 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 41f20514d1dc 9 hours ago 1.05GB 2025-06-19 10:53:53.735815 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 7a683b4a54db 9 hours ago 1.42GB 2025-06-19 10:53:53.735826 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 b0ac03972cd3 9 hours ago 1.29GB 2025-06-19 10:53:53.735838 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 27cc063ad1ee 9 hours ago 1.29GB 2025-06-19 10:53:53.735850 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 3961faeea082 9 hours ago 1.29GB 2025-06-19 10:53:53.735862 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 04d917498625 9 hours ago 1.06GB 2025-06-19 10:53:53.735874 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 195e35cf940a 9 hours ago 1.06GB 2025-06-19 10:53:53.735886 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 16b183490e42 9 hours ago 1.06GB 2025-06-19 10:53:53.735898 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 91517ab27a46 9 hours ago 1.11GB 2025-06-19 10:53:53.735910 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 25034b238f44 9 hours ago 1.11GB 2025-06-19 10:53:53.735923 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 f9a01fc47d6f 9 hours ago 1.13GB 2025-06-19 10:53:53.735941 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 cc72cdcb3ea1 9 hours ago 1.24GB 2025-06-19 10:53:53.735954 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 c69e2336744f 9 hours ago 1.04GB 2025-06-19 
10:53:53.735966 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 41bfb033bd2c 9 hours ago 946MB 2025-06-19 10:53:53.735977 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 a86a91d5e3e6 9 hours ago 947MB 2025-06-19 10:53:53.735987 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 342b8ebe5f79 9 hours ago 946MB 2025-06-19 10:53:53.735998 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 7261603108a3 9 hours ago 947MB 2025-06-19 10:53:53.947251 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-06-19 10:53:53.947605 | orchestrator | ++ semver latest 5.0.0 2025-06-19 10:53:54.004068 | orchestrator | 2025-06-19 10:53:54.004153 | orchestrator | ## Containers @ testbed-node-2 2025-06-19 10:53:54.004168 | orchestrator | 2025-06-19 10:53:54.004231 | orchestrator | + [[ -1 -eq -1 ]] 2025-06-19 10:53:54.004252 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-06-19 10:53:54.004270 | orchestrator | + echo 2025-06-19 10:53:54.004285 | orchestrator | + echo '## Containers @ testbed-node-2' 2025-06-19 10:53:54.004297 | orchestrator | + echo 2025-06-19 10:53:54.004308 | orchestrator | + osism container testbed-node-2 ps 2025-06-19 10:53:56.103296 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-06-19 10:53:56.103421 | orchestrator | 534db743ad72 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-06-19 10:53:56.103439 | orchestrator | a89db4e04b9b registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-06-19 10:53:56.103470 | orchestrator | c1d611db594d registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-06-19 10:53:56.103481 | orchestrator | 381624c96e13 
registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2025-06-19 10:53:56.103528 | orchestrator | 489e3d2da9b3 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_api 2025-06-19 10:53:56.103541 | orchestrator | 182c8b495ce2 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2025-06-19 10:53:56.103552 | orchestrator | e3483dfad87e registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor 2025-06-19 10:53:56.103562 | orchestrator | 70a52f8846c2 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_api 2025-06-19 10:53:56.103573 | orchestrator | ae99d76d1364 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker 2025-06-19 10:53:56.103584 | orchestrator | d76a1fe069a7 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2025-06-19 10:53:56.103594 | orchestrator | c90e0f48e46a registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns 2025-06-19 10:53:56.103605 | orchestrator | 5e8452da04c0 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy 2025-06-19 10:53:56.103616 | orchestrator | 699a09dfabed registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_producer 2025-06-19 10:53:56.103626 | orchestrator | 60a6f47f8ab2 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_central 2025-06-19 10:53:56.103637 | orchestrator | 251488a146dc 
registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_conductor 2025-06-19 10:53:56.103647 | orchestrator | e22950bb400c registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api 2025-06-19 10:53:56.103658 | orchestrator | 4ccde57c8c42 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9 2025-06-19 10:53:56.103668 | orchestrator | ad87cccef46e registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server 2025-06-19 10:53:56.103679 | orchestrator | 779308bea30d registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker 2025-06-19 10:53:56.103690 | orchestrator | c7acaa96838d registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_keystone_listener 2025-06-19 10:53:56.103700 | orchestrator | 7eb764813c37 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) nova_api 2025-06-19 10:53:56.103728 | orchestrator | a4c04a38d96f registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-06-19 10:53:56.103740 | orchestrator | 1e79bf4da1bb registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_api 2025-06-19 10:53:56.103751 | orchestrator | cb7491988e64 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) glance_api 2025-06-19 10:53:56.103769 | orchestrator | 3cf7792c7de3 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_scheduler 2025-06-19 10:53:56.103788 | 
orchestrator | e170f455f622 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) cinder_api 2025-06-19 10:53:56.103799 | orchestrator | 34f7973333f1 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter 2025-06-19 10:53:56.103812 | orchestrator | 0e856bbe1416 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2025-06-19 10:53:56.103824 | orchestrator | 1d86441caf46 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter 2025-06-19 10:53:56.103836 | orchestrator | 6dc4439bb2a7 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_mysqld_exporter 2025-06-19 10:53:56.103857 | orchestrator | f4ce93114718 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_node_exporter 2025-06-19 10:53:56.103878 | orchestrator | 9a391f01b318 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-2 2025-06-19 10:53:56.103898 | orchestrator | 684dcffa9960 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone 2025-06-19 10:53:56.103920 | orchestrator | 7ad6f3fd4690 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) horizon 2025-06-19 10:53:56.103942 | orchestrator | c4bd32678d14 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) keystone_fernet 2025-06-19 10:53:56.103965 | orchestrator | 9f026932fcf7 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) 
keystone_ssh 2025-06-19 10:53:56.103985 | orchestrator | c60e1d5d13a2 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) opensearch_dashboards 2025-06-19 10:53:56.103998 | orchestrator | 4485bb554d51 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 19 minutes ago Up 19 minutes (healthy) mariadb 2025-06-19 10:53:56.104010 | orchestrator | 184cadacafae registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch 2025-06-19 10:53:56.104022 | orchestrator | 66385ba065bc registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes keepalived 2025-06-19 10:53:56.104035 | orchestrator | 17e1428e2754 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) proxysql 2025-06-19 10:53:56.104047 | orchestrator | 8438bf154372 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 22 minutes ago Up 22 minutes ceph-crash-testbed-node-2 2025-06-19 10:53:56.104059 | orchestrator | 9ff9a3ec86d0 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) haproxy 2025-06-19 10:53:56.104079 | orchestrator | cd70c4b05d47 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 25 minutes ago Up 24 minutes ovn_northd 2025-06-19 10:53:56.104100 | orchestrator | 815f608702f7 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_sb_db 2025-06-19 10:53:56.104113 | orchestrator | 5db772067b65 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes ovn_nb_db 2025-06-19 10:53:56.104125 | orchestrator | 76e293525032 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 25 minutes ago Up 25 minutes (healthy) rabbitmq 2025-06-19 10:53:56.104137 | orchestrator | 7bcdabc6d38f 
registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_controller 2025-06-19 10:53:56.104150 | orchestrator | e0631e3e8f22 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 26 minutes ago Up 26 minutes ceph-mon-testbed-node-2 2025-06-19 10:53:56.104162 | orchestrator | 9aa71668f6fc registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_vswitchd 2025-06-19 10:53:56.104213 | orchestrator | 2f1e283bd972 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) openvswitch_db 2025-06-19 10:53:56.104227 | orchestrator | a9a6523ea98e registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis_sentinel 2025-06-19 10:53:56.104238 | orchestrator | 6c1eb0e142f5 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) redis 2025-06-19 10:53:56.104249 | orchestrator | 2e71ea9dab0b registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) memcached 2025-06-19 10:53:56.104259 | orchestrator | 04381566fa00 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes cron 2025-06-19 10:53:56.104271 | orchestrator | 8a048dcd9fb9 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes kolla_toolbox 2025-06-19 10:53:56.104281 | orchestrator | de29feab8972 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes fluentd 2025-06-19 10:53:56.347429 | orchestrator | 2025-06-19 10:53:56.347544 | orchestrator | ## Images @ testbed-node-2 2025-06-19 10:53:56.347566 | orchestrator | 2025-06-19 10:53:56.347584 | orchestrator | + echo 2025-06-19 10:53:56.347603 | orchestrator | + echo '## Images @ testbed-node-2' 2025-06-19 10:53:56.347622 | 
orchestrator | + echo 2025-06-19 10:53:56.347641 | orchestrator | + osism container testbed-node-2 images 2025-06-19 10:53:58.386875 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-06-19 10:53:58.386981 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 7f16f2edcdbe 8 hours ago 1.27GB 2025-06-19 10:53:58.386996 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 a2be729d3919 9 hours ago 417MB 2025-06-19 10:53:58.387008 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 9bd39ba00847 9 hours ago 628MB 2025-06-19 10:53:58.387019 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 57408b87a880 9 hours ago 1.59GB 2025-06-19 10:53:58.387050 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 ce100d02455a 9 hours ago 1.55GB 2025-06-19 10:53:58.387066 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 a6059dc263fb 9 hours ago 329MB 2025-06-19 10:53:58.387112 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 15167c71a27a 9 hours ago 1.01GB 2025-06-19 10:53:58.387155 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 69bb822e4c46 9 hours ago 326MB 2025-06-19 10:53:58.387167 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 c79b10660dab 9 hours ago 375MB 2025-06-19 10:53:58.387210 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 b5eb771b609e 9 hours ago 746MB 2025-06-19 10:53:58.387224 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 986ab28569d2 9 hours ago 318MB 2025-06-19 10:53:58.387234 | orchestrator | registry.osism.tech/kolla/cron 2024.2 2ef0eaede3d9 9 hours ago 318MB 2025-06-19 10:53:58.387245 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 2ba5be517edf 9 hours ago 590MB 2025-06-19 10:53:58.387255 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 004b3542e240 9 hours ago 353MB 2025-06-19 10:53:58.387266 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 b153d384e2e9 9 hours 
ago 351MB 2025-06-19 10:53:58.387276 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 b9003cc00d54 9 hours ago 358MB 2025-06-19 10:53:58.387286 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 36b2f03ef43f 9 hours ago 410MB 2025-06-19 10:53:58.387297 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 98e28e256541 9 hours ago 344MB 2025-06-19 10:53:58.387307 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 382fa572aeb0 9 hours ago 1.21GB 2025-06-19 10:53:58.387332 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 dcc2b16f119a 9 hours ago 361MB 2025-06-19 10:53:58.387349 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 c22fdcceadc5 9 hours ago 361MB 2025-06-19 10:53:58.387360 | orchestrator | registry.osism.tech/kolla/redis 2024.2 99d377c1725b 9 hours ago 324MB 2025-06-19 10:53:58.387370 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 c9525efff366 9 hours ago 324MB 2025-06-19 10:53:58.387381 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 4a0c9d8671fc 9 hours ago 1.41GB 2025-06-19 10:53:58.387391 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 a9b50070a7ce 9 hours ago 1.41GB 2025-06-19 10:53:58.387404 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 5ef54d43211d 9 hours ago 1.31GB 2025-06-19 10:53:58.387415 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 506b859610dc 9 hours ago 1.2GB 2025-06-19 10:53:58.387428 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 21e37512da66 9 hours ago 1.1GB 2025-06-19 10:53:58.387440 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 7c4aa4fc28f6 9 hours ago 1.12GB 2025-06-19 10:53:58.387451 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 a5bce9a2abbb 9 hours ago 1.1GB 2025-06-19 10:53:58.387475 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 
146b987c62d9 9 hours ago 1.1GB 2025-06-19 10:53:58.387499 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 19c69bf11a7c 9 hours ago 1.12GB 2025-06-19 10:53:58.387511 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 7f11c4464820 9 hours ago 1.15GB 2025-06-19 10:53:58.387523 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 e9b0ecda36f6 9 hours ago 1.05GB 2025-06-19 10:53:58.387536 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 110d6dcc33ba 9 hours ago 1.05GB 2025-06-19 10:53:58.387547 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 aae74116f1d4 9 hours ago 1.05GB 2025-06-19 10:53:58.387568 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 c48b45c02968 9 hours ago 1.06GB 2025-06-19 10:53:58.387602 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 68bdaf277975 9 hours ago 1.06GB 2025-06-19 10:53:58.387615 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 41f20514d1dc 9 hours ago 1.05GB 2025-06-19 10:53:58.387627 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 7a683b4a54db 9 hours ago 1.42GB 2025-06-19 10:53:58.387639 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 b0ac03972cd3 9 hours ago 1.29GB 2025-06-19 10:53:58.387651 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 27cc063ad1ee 9 hours ago 1.29GB 2025-06-19 10:53:58.387664 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 3961faeea082 9 hours ago 1.29GB 2025-06-19 10:53:58.387677 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 04d917498625 9 hours ago 1.06GB 2025-06-19 10:53:58.387689 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 195e35cf940a 9 hours ago 1.06GB 2025-06-19 10:53:58.387701 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 16b183490e42 9 hours ago 1.06GB 2025-06-19 10:53:58.387713 | orchestrator | registry.osism.tech/kolla/keystone-ssh 
2024.2 91517ab27a46 9 hours ago 1.11GB 2025-06-19 10:53:58.387725 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 25034b238f44 9 hours ago 1.11GB 2025-06-19 10:53:58.387738 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 f9a01fc47d6f 9 hours ago 1.13GB 2025-06-19 10:53:58.387750 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 cc72cdcb3ea1 9 hours ago 1.24GB 2025-06-19 10:53:58.387761 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 c69e2336744f 9 hours ago 1.04GB 2025-06-19 10:53:58.387772 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 41bfb033bd2c 9 hours ago 946MB 2025-06-19 10:53:58.387782 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 a86a91d5e3e6 9 hours ago 947MB 2025-06-19 10:53:58.387793 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 342b8ebe5f79 9 hours ago 946MB 2025-06-19 10:53:58.387804 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 7261603108a3 9 hours ago 947MB 2025-06-19 10:53:58.615746 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2025-06-19 10:53:58.622212 | orchestrator | + set -e 2025-06-19 10:53:58.622249 | orchestrator | + source /opt/manager-vars.sh 2025-06-19 10:53:58.623395 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-19 10:53:58.623417 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-19 10:53:58.623429 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-19 10:53:58.623440 | orchestrator | ++ CEPH_VERSION=reef 2025-06-19 10:53:58.623452 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-19 10:53:58.623464 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-19 10:53:58.623475 | orchestrator | ++ export MANAGER_VERSION=latest 2025-06-19 10:53:58.623486 | orchestrator | ++ MANAGER_VERSION=latest 2025-06-19 10:53:58.623498 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-19 10:53:58.623509 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-19 10:53:58.623520 
| orchestrator | ++ export ARA=false 2025-06-19 10:53:58.623531 | orchestrator | ++ ARA=false 2025-06-19 10:53:58.623542 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-19 10:53:58.623553 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-19 10:53:58.623569 | orchestrator | ++ export TEMPEST=false 2025-06-19 10:53:58.623580 | orchestrator | ++ TEMPEST=false 2025-06-19 10:53:58.623592 | orchestrator | ++ export IS_ZUUL=true 2025-06-19 10:53:58.623603 | orchestrator | ++ IS_ZUUL=true 2025-06-19 10:53:58.623615 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.19 2025-06-19 10:53:58.623626 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.19 2025-06-19 10:53:58.623637 | orchestrator | ++ export EXTERNAL_API=false 2025-06-19 10:53:58.623648 | orchestrator | ++ EXTERNAL_API=false 2025-06-19 10:53:58.623659 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-19 10:53:58.623693 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-19 10:53:58.623705 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-19 10:53:58.623716 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-19 10:53:58.623727 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-19 10:53:58.623738 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-19 10:53:58.623749 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-06-19 10:53:58.623761 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2025-06-19 10:53:58.635270 | orchestrator | + set -e 2025-06-19 10:53:58.635296 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-19 10:53:58.635307 | orchestrator | ++ export INTERACTIVE=false 2025-06-19 10:53:58.635318 | orchestrator | ++ INTERACTIVE=false 2025-06-19 10:53:58.635328 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-19 10:53:58.635339 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-19 10:53:58.635614 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-06-19 10:53:58.636823 | 
orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-06-19 10:53:58.641076 | orchestrator | ++ export MANAGER_VERSION=latest 2025-06-19 10:53:58.641102 | orchestrator | ++ MANAGER_VERSION=latest 2025-06-19 10:53:58.641113 | orchestrator | 2025-06-19 10:53:58.641125 | orchestrator | # Ceph status 2025-06-19 10:53:58.641136 | orchestrator | 2025-06-19 10:53:58.641147 | orchestrator | + echo 2025-06-19 10:53:58.641157 | orchestrator | + echo '# Ceph status' 2025-06-19 10:53:58.641168 | orchestrator | + echo 2025-06-19 10:53:58.641209 | orchestrator | + ceph -s 2025-06-19 10:53:59.184913 | orchestrator | cluster: 2025-06-19 10:53:59.185029 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2025-06-19 10:53:59.185044 | orchestrator | health: HEALTH_OK 2025-06-19 10:53:59.185056 | orchestrator | 2025-06-19 10:53:59.185067 | orchestrator | services: 2025-06-19 10:53:59.185078 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 26m) 2025-06-19 10:53:59.185091 | orchestrator | mgr: testbed-node-1(active, since 14m), standbys: testbed-node-2, testbed-node-0 2025-06-19 10:53:59.185103 | orchestrator | mds: 1/1 daemons up, 2 standby 2025-06-19 10:53:59.185136 | orchestrator | osd: 6 osds: 6 up (since 23m), 6 in (since 23m) 2025-06-19 10:53:59.185148 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2025-06-19 10:53:59.185159 | orchestrator | 2025-06-19 10:53:59.185169 | orchestrator | data: 2025-06-19 10:53:59.185217 | orchestrator | volumes: 1/1 healthy 2025-06-19 10:53:59.185237 | orchestrator | pools: 14 pools, 401 pgs 2025-06-19 10:53:59.185249 | orchestrator | objects: 524 objects, 2.2 GiB 2025-06-19 10:53:59.185260 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2025-06-19 10:53:59.185271 | orchestrator | pgs: 401 active+clean 2025-06-19 10:53:59.185282 | orchestrator | 2025-06-19 10:53:59.227299 | orchestrator | 2025-06-19 
10:53:59.227371 | orchestrator | # Ceph versions 2025-06-19 10:53:59.227383 | orchestrator | 2025-06-19 10:53:59.227395 | orchestrator | + echo 2025-06-19 10:53:59.227407 | orchestrator | + echo '# Ceph versions' 2025-06-19 10:53:59.227420 | orchestrator | + echo 2025-06-19 10:53:59.227431 | orchestrator | + ceph versions 2025-06-19 10:53:59.819652 | orchestrator | { 2025-06-19 10:53:59.819753 | orchestrator | "mon": { 2025-06-19 10:53:59.819769 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-19 10:53:59.819782 | orchestrator | }, 2025-06-19 10:53:59.819793 | orchestrator | "mgr": { 2025-06-19 10:53:59.819805 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-19 10:53:59.819815 | orchestrator | }, 2025-06-19 10:53:59.819826 | orchestrator | "osd": { 2025-06-19 10:53:59.819837 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2025-06-19 10:53:59.819848 | orchestrator | }, 2025-06-19 10:53:59.819859 | orchestrator | "mds": { 2025-06-19 10:53:59.819870 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-19 10:53:59.819881 | orchestrator | }, 2025-06-19 10:53:59.819892 | orchestrator | "rgw": { 2025-06-19 10:53:59.819902 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-19 10:53:59.819913 | orchestrator | }, 2025-06-19 10:53:59.819924 | orchestrator | "overall": { 2025-06-19 10:53:59.819935 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2025-06-19 10:53:59.819946 | orchestrator | } 2025-06-19 10:53:59.819958 | orchestrator | } 2025-06-19 10:53:59.857153 | orchestrator | 2025-06-19 10:53:59.857310 | orchestrator | # Ceph OSD tree 2025-06-19 10:53:59.857327 | orchestrator | 2025-06-19 10:53:59.857339 | orchestrator | + echo 2025-06-19 
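The `ceph versions` JSON above shows every daemon class on 18.2.7, i.e. its `overall` object has exactly one entry. A hedged sketch of that uniformity check, run against a trimmed capture of the output rather than a live cluster (on a real node you would pipe `ceph versions` instead of the sample file):

```shell
# Trimmed capture of the "overall" section of `ceph versions`.
cat > /tmp/ceph-versions.json <<'EOF'
{
  "overall": {
    "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18
  }
}
EOF

# One distinct "ceph version ..." key means no mixed-version daemons.
count=$(grep -c '"ceph version' /tmp/ceph-versions.json)
if [ "$count" -eq 1 ]; then
  echo "single version across the cluster"
else
  echo "mixed versions detected ($count distinct)" >&2
fi
```

This is the condition an operator typically asserts before and after an upgrade; a count above one would indicate a partially upgraded cluster.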
10:53:59.857350 | orchestrator | + echo '# Ceph OSD tree' 2025-06-19 10:53:59.857361 | orchestrator | + echo 2025-06-19 10:53:59.857372 | orchestrator | + ceph osd df tree 2025-06-19 10:54:00.384453 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2025-06-19 10:54:00.384548 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 430 MiB 113 GiB 5.92 1.00 - root default 2025-06-19 10:54:00.384559 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-3 2025-06-19 10:54:00.384567 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.1 GiB 1 KiB 74 MiB 19 GiB 6.09 1.03 190 up osd.1 2025-06-19 10:54:00.384575 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.1 GiB 1 KiB 70 MiB 19 GiB 5.75 0.97 202 up osd.4 2025-06-19 10:54:00.384582 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4 2025-06-19 10:54:00.384590 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 904 MiB 835 MiB 1 KiB 70 MiB 19 GiB 4.42 0.75 189 up osd.0 2025-06-19 10:54:00.384597 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.4 GiB 1 KiB 74 MiB 18 GiB 7.41 1.25 201 up osd.3 2025-06-19 10:54:00.384604 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5 2025-06-19 10:54:00.384612 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.2 GiB 1 KiB 70 MiB 19 GiB 6.14 1.04 191 up osd.2 2025-06-19 10:54:00.384619 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.1 GiB 1 KiB 74 MiB 19 GiB 5.69 0.96 197 up osd.5 2025-06-19 10:54:00.384626 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 430 MiB 113 GiB 5.92 2025-06-19 10:54:00.384634 | orchestrator | MIN/MAX VAR: 0.75/1.25 STDDEV: 0.88 2025-06-19 10:54:00.429305 | orchestrator | 2025-06-19 10:54:00.429368 | orchestrator | # Ceph monitor status 2025-06-19 10:54:00.429381 | orchestrator | 2025-06-19 10:54:00.429392 | 
orchestrator | + echo 2025-06-19 10:54:00.429421 | orchestrator | + echo '# Ceph monitor status' 2025-06-19 10:54:00.429432 | orchestrator | + echo 2025-06-19 10:54:00.429443 | orchestrator | + ceph mon stat 2025-06-19 10:54:01.005593 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 4, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2025-06-19 10:54:01.059111 | orchestrator | 2025-06-19 10:54:01.059261 | orchestrator | # Ceph quorum status 2025-06-19 10:54:01.059280 | orchestrator | 2025-06-19 10:54:01.059293 | orchestrator | + echo 2025-06-19 10:54:01.059304 | orchestrator | + echo '# Ceph quorum status' 2025-06-19 10:54:01.059315 | orchestrator | + echo 2025-06-19 10:54:01.059665 | orchestrator | + ceph quorum_status 2025-06-19 10:54:01.059769 | orchestrator | + jq 2025-06-19 10:54:01.684100 | orchestrator | { 2025-06-19 10:54:01.684248 | orchestrator | "election_epoch": 4, 2025-06-19 10:54:01.684265 | orchestrator | "quorum": [ 2025-06-19 10:54:01.684276 | orchestrator | 0, 2025-06-19 10:54:01.684287 | orchestrator | 1, 2025-06-19 10:54:01.684298 | orchestrator | 2 2025-06-19 10:54:01.684309 | orchestrator | ], 2025-06-19 10:54:01.684319 | orchestrator | "quorum_names": [ 2025-06-19 10:54:01.684330 | orchestrator | "testbed-node-0", 2025-06-19 10:54:01.684341 | orchestrator | "testbed-node-1", 2025-06-19 10:54:01.684351 | orchestrator | "testbed-node-2" 2025-06-19 10:54:01.684362 | orchestrator | ], 2025-06-19 10:54:01.684373 | orchestrator | "quorum_leader_name": "testbed-node-0", 2025-06-19 10:54:01.684385 | orchestrator | "quorum_age": 1603, 2025-06-19 10:54:01.684395 | orchestrator | "features": { 2025-06-19 10:54:01.684406 | orchestrator | "quorum_con": "4540138322906710015", 2025-06-19 
10:54:01.684417 | orchestrator | "quorum_mon": [ 2025-06-19 10:54:01.684428 | orchestrator | "kraken", 2025-06-19 10:54:01.684438 | orchestrator | "luminous", 2025-06-19 10:54:01.684473 | orchestrator | "mimic", 2025-06-19 10:54:01.684484 | orchestrator | "osdmap-prune", 2025-06-19 10:54:01.684495 | orchestrator | "nautilus", 2025-06-19 10:54:01.684506 | orchestrator | "octopus", 2025-06-19 10:54:01.684516 | orchestrator | "pacific", 2025-06-19 10:54:01.684527 | orchestrator | "elector-pinging", 2025-06-19 10:54:01.684538 | orchestrator | "quincy", 2025-06-19 10:54:01.684549 | orchestrator | "reef" 2025-06-19 10:54:01.684560 | orchestrator | ] 2025-06-19 10:54:01.684570 | orchestrator | }, 2025-06-19 10:54:01.684581 | orchestrator | "monmap": { 2025-06-19 10:54:01.684592 | orchestrator | "epoch": 1, 2025-06-19 10:54:01.684602 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2025-06-19 10:54:01.684614 | orchestrator | "modified": "2025-06-19T10:27:06.816163Z", 2025-06-19 10:54:01.684625 | orchestrator | "created": "2025-06-19T10:27:06.816163Z", 2025-06-19 10:54:01.684636 | orchestrator | "min_mon_release": 18, 2025-06-19 10:54:01.684648 | orchestrator | "min_mon_release_name": "reef", 2025-06-19 10:54:01.684660 | orchestrator | "election_strategy": 1, 2025-06-19 10:54:01.684672 | orchestrator | "disallowed_leaders: ": "", 2025-06-19 10:54:01.684837 | orchestrator | "stretch_mode": false, 2025-06-19 10:54:01.684851 | orchestrator | "tiebreaker_mon": "", 2025-06-19 10:54:01.684864 | orchestrator | "removed_ranks: ": "", 2025-06-19 10:54:01.684876 | orchestrator | "features": { 2025-06-19 10:54:01.684888 | orchestrator | "persistent": [ 2025-06-19 10:54:01.684900 | orchestrator | "kraken", 2025-06-19 10:54:01.684912 | orchestrator | "luminous", 2025-06-19 10:54:01.684925 | orchestrator | "mimic", 2025-06-19 10:54:01.684937 | orchestrator | "osdmap-prune", 2025-06-19 10:54:01.684950 | orchestrator | "nautilus", 2025-06-19 10:54:01.684962 | orchestrator | 
"octopus", 2025-06-19 10:54:01.684974 | orchestrator | "pacific", 2025-06-19 10:54:01.684986 | orchestrator | "elector-pinging", 2025-06-19 10:54:01.684999 | orchestrator | "quincy", 2025-06-19 10:54:01.685009 | orchestrator | "reef" 2025-06-19 10:54:01.685020 | orchestrator | ], 2025-06-19 10:54:01.685031 | orchestrator | "optional": [] 2025-06-19 10:54:01.685041 | orchestrator | }, 2025-06-19 10:54:01.685053 | orchestrator | "mons": [ 2025-06-19 10:54:01.685064 | orchestrator | { 2025-06-19 10:54:01.685075 | orchestrator | "rank": 0, 2025-06-19 10:54:01.685086 | orchestrator | "name": "testbed-node-0", 2025-06-19 10:54:01.685096 | orchestrator | "public_addrs": { 2025-06-19 10:54:01.685107 | orchestrator | "addrvec": [ 2025-06-19 10:54:01.685118 | orchestrator | { 2025-06-19 10:54:01.685128 | orchestrator | "type": "v2", 2025-06-19 10:54:01.685139 | orchestrator | "addr": "192.168.16.10:3300", 2025-06-19 10:54:01.685150 | orchestrator | "nonce": 0 2025-06-19 10:54:01.685160 | orchestrator | }, 2025-06-19 10:54:01.685190 | orchestrator | { 2025-06-19 10:54:01.685202 | orchestrator | "type": "v1", 2025-06-19 10:54:01.685213 | orchestrator | "addr": "192.168.16.10:6789", 2025-06-19 10:54:01.685223 | orchestrator | "nonce": 0 2025-06-19 10:54:01.685234 | orchestrator | } 2025-06-19 10:54:01.685245 | orchestrator | ] 2025-06-19 10:54:01.685255 | orchestrator | }, 2025-06-19 10:54:01.685266 | orchestrator | "addr": "192.168.16.10:6789/0", 2025-06-19 10:54:01.685277 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2025-06-19 10:54:01.685287 | orchestrator | "priority": 0, 2025-06-19 10:54:01.685298 | orchestrator | "weight": 0, 2025-06-19 10:54:01.685309 | orchestrator | "crush_location": "{}" 2025-06-19 10:54:01.685319 | orchestrator | }, 2025-06-19 10:54:01.685330 | orchestrator | { 2025-06-19 10:54:01.685340 | orchestrator | "rank": 1, 2025-06-19 10:54:01.685351 | orchestrator | "name": "testbed-node-1", 2025-06-19 10:54:01.685361 | orchestrator | 
"public_addrs": { 2025-06-19 10:54:01.685372 | orchestrator | "addrvec": [ 2025-06-19 10:54:01.685382 | orchestrator | { 2025-06-19 10:54:01.685393 | orchestrator | "type": "v2", 2025-06-19 10:54:01.685403 | orchestrator | "addr": "192.168.16.11:3300", 2025-06-19 10:54:01.685414 | orchestrator | "nonce": 0 2025-06-19 10:54:01.685425 | orchestrator | }, 2025-06-19 10:54:01.685435 | orchestrator | { 2025-06-19 10:54:01.685446 | orchestrator | "type": "v1", 2025-06-19 10:54:01.685456 | orchestrator | "addr": "192.168.16.11:6789", 2025-06-19 10:54:01.685467 | orchestrator | "nonce": 0 2025-06-19 10:54:01.685478 | orchestrator | } 2025-06-19 10:54:01.685488 | orchestrator | ] 2025-06-19 10:54:01.685499 | orchestrator | }, 2025-06-19 10:54:01.685510 | orchestrator | "addr": "192.168.16.11:6789/0", 2025-06-19 10:54:01.685537 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2025-06-19 10:54:01.685547 | orchestrator | "priority": 0, 2025-06-19 10:54:01.685558 | orchestrator | "weight": 0, 2025-06-19 10:54:01.685568 | orchestrator | "crush_location": "{}" 2025-06-19 10:54:01.685579 | orchestrator | }, 2025-06-19 10:54:01.685589 | orchestrator | { 2025-06-19 10:54:01.685600 | orchestrator | "rank": 2, 2025-06-19 10:54:01.685611 | orchestrator | "name": "testbed-node-2", 2025-06-19 10:54:01.685621 | orchestrator | "public_addrs": { 2025-06-19 10:54:01.685632 | orchestrator | "addrvec": [ 2025-06-19 10:54:01.685642 | orchestrator | { 2025-06-19 10:54:01.685653 | orchestrator | "type": "v2", 2025-06-19 10:54:01.685663 | orchestrator | "addr": "192.168.16.12:3300", 2025-06-19 10:54:01.685674 | orchestrator | "nonce": 0 2025-06-19 10:54:01.685685 | orchestrator | }, 2025-06-19 10:54:01.685696 | orchestrator | { 2025-06-19 10:54:01.685706 | orchestrator | "type": "v1", 2025-06-19 10:54:01.685717 | orchestrator | "addr": "192.168.16.12:6789", 2025-06-19 10:54:01.685727 | orchestrator | "nonce": 0 2025-06-19 10:54:01.685738 | orchestrator | } 2025-06-19 10:54:01.685748 | 
orchestrator | ] 2025-06-19 10:54:01.685759 | orchestrator | }, 2025-06-19 10:54:01.685769 | orchestrator | "addr": "192.168.16.12:6789/0", 2025-06-19 10:54:01.685780 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2025-06-19 10:54:01.685791 | orchestrator | "priority": 0, 2025-06-19 10:54:01.685801 | orchestrator | "weight": 0, 2025-06-19 10:54:01.685812 | orchestrator | "crush_location": "{}" 2025-06-19 10:54:01.685822 | orchestrator | } 2025-06-19 10:54:01.685833 | orchestrator | ] 2025-06-19 10:54:01.685843 | orchestrator | } 2025-06-19 10:54:01.685854 | orchestrator | } 2025-06-19 10:54:01.685878 | orchestrator | 2025-06-19 10:54:01.685890 | orchestrator | # Ceph free space status 2025-06-19 10:54:01.685900 | orchestrator | 2025-06-19 10:54:01.685911 | orchestrator | + echo 2025-06-19 10:54:01.685922 | orchestrator | + echo '# Ceph free space status' 2025-06-19 10:54:01.685932 | orchestrator | + echo 2025-06-19 10:54:01.685943 | orchestrator | + ceph df 2025-06-19 10:54:02.261421 | orchestrator | --- RAW STORAGE --- 2025-06-19 10:54:02.261521 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2025-06-19 10:54:02.261547 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2025-06-19 10:54:02.261559 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2025-06-19 10:54:02.261571 | orchestrator | 2025-06-19 10:54:02.261582 | orchestrator | --- POOLS --- 2025-06-19 10:54:02.261594 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2025-06-19 10:54:02.261605 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB 2025-06-19 10:54:02.261616 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2025-06-19 10:54:02.261627 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2025-06-19 10:54:02.261637 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2025-06-19 10:54:02.261647 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2025-06-19 10:54:02.261658 | orchestrator | default.rgw.control 
6 32 0 B 8 0 B 0 35 GiB 2025-06-19 10:54:02.261669 | orchestrator | default.rgw.log 7 32 3.6 KiB 177 408 KiB 0 35 GiB 2025-06-19 10:54:02.261679 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2025-06-19 10:54:02.261690 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 53 GiB 2025-06-19 10:54:02.261700 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2025-06-19 10:54:02.261711 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2025-06-19 10:54:02.261721 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.96 35 GiB 2025-06-19 10:54:02.261732 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2025-06-19 10:54:02.261742 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2025-06-19 10:54:02.302999 | orchestrator | ++ semver latest 5.0.0 2025-06-19 10:54:02.356755 | orchestrator | + [[ -1 -eq -1 ]] 2025-06-19 10:54:02.356795 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-06-19 10:54:02.356806 | orchestrator | + [[ ! -e /etc/redhat-release ]] 2025-06-19 10:54:02.356843 | orchestrator | + osism apply facts 2025-06-19 10:54:03.982640 | orchestrator | Registering Redlock._acquired_script 2025-06-19 10:54:03.982739 | orchestrator | Registering Redlock._extend_script 2025-06-19 10:54:03.982754 | orchestrator | Registering Redlock._release_script 2025-06-19 10:54:04.039576 | orchestrator | 2025-06-19 10:54:04 | INFO  | Task c8360db9-bde8-49c2-9ec6-106ccbd82b6f (facts) was prepared for execution. 2025-06-19 10:54:04.039671 | orchestrator | 2025-06-19 10:54:04 | INFO  | It takes a moment until task c8360db9-bde8-49c2-9ec6-106ccbd82b6f (facts) has been started and output is visible here. 
2025-06-19 10:54:16.698138 | orchestrator | 2025-06-19 10:54:16.698298 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-06-19 10:54:16.698316 | orchestrator | 2025-06-19 10:54:16.698328 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-06-19 10:54:16.698340 | orchestrator | Thursday 19 June 2025 10:54:08 +0000 (0:00:00.274) 0:00:00.274 ********* 2025-06-19 10:54:16.698351 | orchestrator | ok: [testbed-manager] 2025-06-19 10:54:16.698363 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:54:16.698374 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:54:16.698385 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:54:16.698395 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:54:16.698406 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:54:16.698416 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:54:16.698427 | orchestrator | 2025-06-19 10:54:16.698438 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-06-19 10:54:16.698449 | orchestrator | Thursday 19 June 2025 10:54:09 +0000 (0:00:01.355) 0:00:01.630 ********* 2025-06-19 10:54:16.698459 | orchestrator | skipping: [testbed-manager] 2025-06-19 10:54:16.698471 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:54:16.698481 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:54:16.698492 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:54:16.698502 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:54:16.698513 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:54:16.698523 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:54:16.698534 | orchestrator | 2025-06-19 10:54:16.698545 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-19 10:54:16.698555 | orchestrator | 2025-06-19 10:54:16.698566 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2025-06-19 10:54:16.698578 | orchestrator | Thursday 19 June 2025 10:54:10 +0000 (0:00:01.180) 0:00:02.810 ********* 2025-06-19 10:54:16.698589 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:54:16.698599 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:54:16.698610 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:54:16.698621 | orchestrator | ok: [testbed-manager] 2025-06-19 10:54:16.698632 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:54:16.698644 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:54:16.698656 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:54:16.698668 | orchestrator | 2025-06-19 10:54:16.698680 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-06-19 10:54:16.698692 | orchestrator | 2025-06-19 10:54:16.698704 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-06-19 10:54:16.698715 | orchestrator | Thursday 19 June 2025 10:54:15 +0000 (0:00:05.092) 0:00:07.903 ********* 2025-06-19 10:54:16.698727 | orchestrator | skipping: [testbed-manager] 2025-06-19 10:54:16.698739 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:54:16.698751 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:54:16.698762 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:54:16.698774 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:54:16.698786 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:54:16.698798 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:54:16.698810 | orchestrator | 2025-06-19 10:54:16.698821 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-19 10:54:16.698834 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-19 10:54:16.698875 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2025-06-19 10:54:16.698888 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-19 10:54:16.698901 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-19 10:54:16.698913 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-19 10:54:16.698939 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-19 10:54:16.698951 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-19 10:54:16.698963 | orchestrator | 2025-06-19 10:54:16.698976 | orchestrator | 2025-06-19 10:54:16.698989 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-19 10:54:16.699000 | orchestrator | Thursday 19 June 2025 10:54:16 +0000 (0:00:00.562) 0:00:08.466 ********* 2025-06-19 10:54:16.699011 | orchestrator | =============================================================================== 2025-06-19 10:54:16.699022 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.09s 2025-06-19 10:54:16.699033 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.36s 2025-06-19 10:54:16.699044 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.18s 2025-06-19 10:54:16.699054 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.56s 2025-06-19 10:54:16.923665 | orchestrator | + osism validate ceph-mons 2025-06-19 10:54:18.596100 | orchestrator | Registering Redlock._acquired_script 2025-06-19 10:54:18.596236 | orchestrator | Registering Redlock._extend_script 2025-06-19 10:54:18.596253 | orchestrator | Registering Redlock._release_script 2025-06-19 10:54:38.467978 | orchestrator | 2025-06-19 10:54:38.468060 | 
orchestrator | PLAY [Ceph validate mons] ****************************************************** 2025-06-19 10:54:38.468067 | orchestrator | 2025-06-19 10:54:38.468072 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-06-19 10:54:38.468076 | orchestrator | Thursday 19 June 2025 10:54:22 +0000 (0:00:00.441) 0:00:00.441 ********* 2025-06-19 10:54:38.468081 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-19 10:54:38.468085 | orchestrator | 2025-06-19 10:54:38.468089 | orchestrator | TASK [Create report output directory] ****************************************** 2025-06-19 10:54:38.468093 | orchestrator | Thursday 19 June 2025 10:54:23 +0000 (0:00:00.654) 0:00:01.095 ********* 2025-06-19 10:54:38.468097 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-19 10:54:38.468101 | orchestrator | 2025-06-19 10:54:38.468105 | orchestrator | TASK [Define report vars] ****************************************************** 2025-06-19 10:54:38.468108 | orchestrator | Thursday 19 June 2025 10:54:24 +0000 (0:00:00.830) 0:00:01.926 ********* 2025-06-19 10:54:38.468112 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:54:38.468117 | orchestrator | 2025-06-19 10:54:38.468121 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-06-19 10:54:38.468124 | orchestrator | Thursday 19 June 2025 10:54:24 +0000 (0:00:00.243) 0:00:02.169 ********* 2025-06-19 10:54:38.468128 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:54:38.468132 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:54:38.468135 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:54:38.468139 | orchestrator | 2025-06-19 10:54:38.468143 | orchestrator | TASK [Get container info] ****************************************************** 2025-06-19 10:54:38.468147 | orchestrator | Thursday 19 June 2025 10:54:24 +0000 (0:00:00.301) 0:00:02.471 ********* 2025-06-19 
10:54:38.468150 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:54:38.468196 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:54:38.468200 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:54:38.468204 | orchestrator | 2025-06-19 10:54:38.468207 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-06-19 10:54:38.468211 | orchestrator | Thursday 19 June 2025 10:54:25 +0000 (0:00:00.999) 0:00:03.471 ********* 2025-06-19 10:54:38.468215 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:54:38.468219 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:54:38.468222 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:54:38.468226 | orchestrator | 2025-06-19 10:54:38.468230 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-06-19 10:54:38.468233 | orchestrator | Thursday 19 June 2025 10:54:26 +0000 (0:00:00.290) 0:00:03.762 ********* 2025-06-19 10:54:38.468237 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:54:38.468241 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:54:38.468244 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:54:38.468248 | orchestrator | 2025-06-19 10:54:38.468252 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-19 10:54:38.468255 | orchestrator | Thursday 19 June 2025 10:54:26 +0000 (0:00:00.505) 0:00:04.267 ********* 2025-06-19 10:54:38.468259 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:54:38.468263 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:54:38.468267 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:54:38.468270 | orchestrator | 2025-06-19 10:54:38.468274 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2025-06-19 10:54:38.468278 | orchestrator | Thursday 19 June 2025 10:54:26 +0000 (0:00:00.289) 0:00:04.557 ********* 2025-06-19 10:54:38.468281 | orchestrator | skipping: 
[testbed-node-0] 2025-06-19 10:54:38.468285 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:54:38.468289 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:54:38.468292 | orchestrator | 2025-06-19 10:54:38.468296 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2025-06-19 10:54:38.468300 | orchestrator | Thursday 19 June 2025 10:54:27 +0000 (0:00:00.311) 0:00:04.869 ********* 2025-06-19 10:54:38.468303 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:54:38.468307 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:54:38.468311 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:54:38.468314 | orchestrator | 2025-06-19 10:54:38.468318 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-19 10:54:38.468322 | orchestrator | Thursday 19 June 2025 10:54:27 +0000 (0:00:00.360) 0:00:05.229 ********* 2025-06-19 10:54:38.468326 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:54:38.468329 | orchestrator | 2025-06-19 10:54:38.468333 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-19 10:54:38.468337 | orchestrator | Thursday 19 June 2025 10:54:28 +0000 (0:00:00.731) 0:00:05.961 ********* 2025-06-19 10:54:38.468340 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:54:38.468344 | orchestrator | 2025-06-19 10:54:38.468348 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-19 10:54:38.468351 | orchestrator | Thursday 19 June 2025 10:54:28 +0000 (0:00:00.274) 0:00:06.235 ********* 2025-06-19 10:54:38.468355 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:54:38.468359 | orchestrator | 2025-06-19 10:54:38.468362 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-19 10:54:38.468366 | orchestrator | Thursday 19 June 2025 10:54:28 +0000 (0:00:00.284) 0:00:06.519 ********* 
2025-06-19 10:54:38.468370 | orchestrator | 2025-06-19 10:54:38.468374 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-19 10:54:38.468378 | orchestrator | Thursday 19 June 2025 10:54:28 +0000 (0:00:00.070) 0:00:06.590 ********* 2025-06-19 10:54:38.468382 | orchestrator | 2025-06-19 10:54:38.468386 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-19 10:54:38.468389 | orchestrator | Thursday 19 June 2025 10:54:29 +0000 (0:00:00.075) 0:00:06.665 ********* 2025-06-19 10:54:38.468393 | orchestrator | 2025-06-19 10:54:38.468401 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-19 10:54:38.468404 | orchestrator | Thursday 19 June 2025 10:54:29 +0000 (0:00:00.078) 0:00:06.744 ********* 2025-06-19 10:54:38.468408 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:54:38.468412 | orchestrator | 2025-06-19 10:54:38.468415 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-06-19 10:54:38.468419 | orchestrator | Thursday 19 June 2025 10:54:29 +0000 (0:00:00.282) 0:00:07.026 ********* 2025-06-19 10:54:38.468423 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:54:38.468426 | orchestrator | 2025-06-19 10:54:38.468440 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2025-06-19 10:54:38.468444 | orchestrator | Thursday 19 June 2025 10:54:29 +0000 (0:00:00.246) 0:00:07.273 ********* 2025-06-19 10:54:38.468447 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:54:38.468451 | orchestrator | 2025-06-19 10:54:38.468455 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2025-06-19 10:54:38.468458 | orchestrator | Thursday 19 June 2025 10:54:29 +0000 (0:00:00.121) 0:00:07.395 ********* 2025-06-19 10:54:38.468462 | orchestrator | changed: [testbed-node-0] 
2025-06-19 10:54:38.468466 | orchestrator |
2025-06-19 10:54:38.468469 | orchestrator | TASK [Set quorum test data] ****************************************************
2025-06-19 10:54:38.468473 | orchestrator | Thursday 19 June 2025 10:54:31 +0000 (0:00:01.633) 0:00:09.028 *********
2025-06-19 10:54:38.468477 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:54:38.468480 | orchestrator |
2025-06-19 10:54:38.468484 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] **********************
2025-06-19 10:54:38.468488 | orchestrator | Thursday 19 June 2025 10:54:31 +0000 (0:00:00.336) 0:00:09.364 *********
2025-06-19 10:54:38.468491 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:54:38.468495 | orchestrator |
2025-06-19 10:54:38.468499 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] **************************
2025-06-19 10:54:38.468502 | orchestrator | Thursday 19 June 2025 10:54:32 +0000 (0:00:00.311) 0:00:09.676 *********
2025-06-19 10:54:38.468506 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:54:38.468510 | orchestrator |
2025-06-19 10:54:38.468513 | orchestrator | TASK [Set fsid test vars] ******************************************************
2025-06-19 10:54:38.468517 | orchestrator | Thursday 19 June 2025 10:54:32 +0000 (0:00:00.332) 0:00:10.008 *********
2025-06-19 10:54:38.468520 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:54:38.468524 | orchestrator |
2025-06-19 10:54:38.468539 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] *************
2025-06-19 10:54:38.468543 | orchestrator | Thursday 19 June 2025 10:54:32 +0000 (0:00:00.341) 0:00:10.350 *********
2025-06-19 10:54:38.468547 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:54:38.468551 | orchestrator |
2025-06-19 10:54:38.468556 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] **********************
2025-06-19 10:54:38.468560 | orchestrator | Thursday 19 June 2025 10:54:32 +0000 (0:00:00.121) 0:00:10.471 *********
2025-06-19 10:54:38.468564 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:54:38.468568 | orchestrator |
2025-06-19 10:54:38.468572 | orchestrator | TASK [Prepare status test vars] ************************************************
2025-06-19 10:54:38.468576 | orchestrator | Thursday 19 June 2025 10:54:33 +0000 (0:00:00.147) 0:00:10.619 *********
2025-06-19 10:54:38.468580 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:54:38.468584 | orchestrator |
2025-06-19 10:54:38.468588 | orchestrator | TASK [Gather status data] ******************************************************
2025-06-19 10:54:38.468593 | orchestrator | Thursday 19 June 2025 10:54:33 +0000 (0:00:00.136) 0:00:10.756 *********
2025-06-19 10:54:38.468597 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:54:38.468601 | orchestrator |
2025-06-19 10:54:38.468605 | orchestrator | TASK [Set health test data] ****************************************************
2025-06-19 10:54:38.468609 | orchestrator | Thursday 19 June 2025 10:54:34 +0000 (0:00:01.297) 0:00:12.053 *********
2025-06-19 10:54:38.468614 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:54:38.468618 | orchestrator |
2025-06-19 10:54:38.468622 | orchestrator | TASK [Fail cluster-health if health is not acceptable] *************************
2025-06-19 10:54:38.468629 | orchestrator | Thursday 19 June 2025 10:54:34 +0000 (0:00:00.319) 0:00:12.373 *********
2025-06-19 10:54:38.468633 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:54:38.468637 | orchestrator |
2025-06-19 10:54:38.468642 | orchestrator | TASK [Pass cluster-health if health is acceptable] *****************************
2025-06-19 10:54:38.468646 | orchestrator | Thursday 19 June 2025 10:54:34 +0000 (0:00:00.123) 0:00:12.497 *********
2025-06-19 10:54:38.468650 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:54:38.468654 | orchestrator |
2025-06-19 10:54:38.468658 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] ****************
2025-06-19 10:54:38.468662 | orchestrator | Thursday 19 June 2025 10:54:35 +0000 (0:00:00.166) 0:00:12.663 *********
2025-06-19 10:54:38.468666 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:54:38.468670 | orchestrator |
2025-06-19 10:54:38.468674 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] ****************************
2025-06-19 10:54:38.468678 | orchestrator | Thursday 19 June 2025 10:54:35 +0000 (0:00:00.134) 0:00:12.798 *********
2025-06-19 10:54:38.468682 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:54:38.468686 | orchestrator |
2025-06-19 10:54:38.468691 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-06-19 10:54:38.468695 | orchestrator | Thursday 19 June 2025 10:54:35 +0000 (0:00:00.334) 0:00:13.133 *********
2025-06-19 10:54:38.468699 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-19 10:54:38.468703 | orchestrator |
2025-06-19 10:54:38.468709 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-06-19 10:54:38.468714 | orchestrator | Thursday 19 June 2025 10:54:35 +0000 (0:00:00.274) 0:00:13.407 *********
2025-06-19 10:54:38.468718 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:54:38.468722 | orchestrator |
2025-06-19 10:54:38.468726 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-06-19 10:54:38.468731 | orchestrator | Thursday 19 June 2025 10:54:36 +0000 (0:00:00.247) 0:00:13.654 *********
2025-06-19 10:54:38.468735 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-19 10:54:38.468739 | orchestrator |
2025-06-19 10:54:38.468743 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-06-19 10:54:38.468747 | orchestrator | Thursday 19 June 2025 10:54:37 +0000 (0:00:01.661) 0:00:15.316 *********
2025-06-19 10:54:38.468751 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-19 10:54:38.468756 | orchestrator |
2025-06-19 10:54:38.468763 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-06-19 10:54:38.468767 | orchestrator | Thursday 19 June 2025 10:54:37 +0000 (0:00:00.265) 0:00:15.581 *********
2025-06-19 10:54:38.468771 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-19 10:54:38.468775 | orchestrator |
2025-06-19 10:54:38.468782 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-19 10:54:40.653330 | orchestrator | Thursday 19 June 2025 10:54:38 +0000 (0:00:00.264) 0:00:15.846 *********
2025-06-19 10:54:40.653403 | orchestrator |
2025-06-19 10:54:40.653409 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-19 10:54:40.653414 | orchestrator | Thursday 19 June 2025 10:54:38 +0000 (0:00:00.075) 0:00:15.922 *********
2025-06-19 10:54:40.653418 | orchestrator |
2025-06-19 10:54:40.653422 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-19 10:54:40.653426 | orchestrator | Thursday 19 June 2025 10:54:38 +0000 (0:00:00.070) 0:00:15.992 *********
2025-06-19 10:54:40.653429 | orchestrator |
2025-06-19 10:54:40.653433 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-06-19 10:54:40.653437 | orchestrator | Thursday 19 June 2025 10:54:38 +0000 (0:00:00.075) 0:00:16.067 *********
2025-06-19 10:54:40.653441 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-19 10:54:40.653445 | orchestrator |
2025-06-19 10:54:40.653449 | orchestrator | TASK [Print report file information] *******************************************
2025-06-19 10:54:40.653469 | orchestrator | Thursday 19 June 2025 10:54:39 +0000 (0:00:01.265) 0:00:17.333 *********
2025-06-19 10:54:40.653473 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2025-06-19 10:54:40.653477 | orchestrator |  "msg": [
2025-06-19 10:54:40.653481 | orchestrator |  "Validator run completed.",
2025-06-19 10:54:40.653485 | orchestrator |  "You can find the report file here:",
2025-06-19 10:54:40.653490 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2025-06-19T10:54:23+00:00-report.json",
2025-06-19 10:54:40.653494 | orchestrator |  "on the following host:",
2025-06-19 10:54:40.653498 | orchestrator |  "testbed-manager"
2025-06-19 10:54:40.653502 | orchestrator |  ]
2025-06-19 10:54:40.653506 | orchestrator | }
2025-06-19 10:54:40.653510 | orchestrator |
2025-06-19 10:54:40.653514 | orchestrator | PLAY RECAP *********************************************************************
2025-06-19 10:54:40.653518 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-06-19 10:54:40.653523 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-19 10:54:40.653528 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-19 10:54:40.653531 | orchestrator |
2025-06-19 10:54:40.653535 | orchestrator |
2025-06-19 10:54:40.653539 | orchestrator | TASKS RECAP ********************************************************************
2025-06-19 10:54:40.653543 | orchestrator | Thursday 19 June 2025 10:54:40 +0000 (0:00:00.607) 0:00:17.940 *********
2025-06-19 10:54:40.653546 | orchestrator | ===============================================================================
2025-06-19 10:54:40.653550 | orchestrator | Aggregate test results step one ----------------------------------------- 1.66s
2025-06-19 10:54:40.653553 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.63s
2025-06-19 10:54:40.653557 | orchestrator | Gather status data ------------------------------------------------------ 1.30s
2025-06-19 10:54:40.653561 | orchestrator | Write report file ------------------------------------------------------- 1.27s
2025-06-19 10:54:40.653565 | orchestrator | Get container info ------------------------------------------------------ 1.00s
2025-06-19 10:54:40.653569 | orchestrator | Create report output directory ------------------------------------------ 0.83s
2025-06-19 10:54:40.653572 | orchestrator | Aggregate test results step one ----------------------------------------- 0.73s
2025-06-19 10:54:40.653576 | orchestrator | Get timestamp for report file ------------------------------------------- 0.65s
2025-06-19 10:54:40.653580 | orchestrator | Print report file information ------------------------------------------- 0.61s
2025-06-19 10:54:40.653584 | orchestrator | Set test result to passed if container is existing ---------------------- 0.51s
2025-06-19 10:54:40.653587 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.36s
2025-06-19 10:54:40.653591 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.34s
2025-06-19 10:54:40.653595 | orchestrator | Set quorum test data ---------------------------------------------------- 0.34s
2025-06-19 10:54:40.653598 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.33s
2025-06-19 10:54:40.653602 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.33s
2025-06-19 10:54:40.653606 | orchestrator | Set health test data ---------------------------------------------------- 0.32s
2025-06-19 10:54:40.653610 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.31s
2025-06-19 10:54:40.653613 | orchestrator | Fail quorum test if not all monitors are in quorum ---------------------- 0.31s
2025-06-19 10:54:40.653617 | orchestrator | Prepare test data for container existance test -------------------------- 0.30s
2025-06-19 10:54:40.653621 | orchestrator | Set test result to failed if container is missing ----------------------- 0.29s
2025-06-19 10:54:40.873431 | orchestrator | + osism validate ceph-mgrs
2025-06-19 10:54:42.513758 | orchestrator | Registering Redlock._acquired_script
2025-06-19 10:54:42.513828 | orchestrator | Registering Redlock._extend_script
2025-06-19 10:54:42.513835 | orchestrator | Registering Redlock._release_script
2025-06-19 10:55:01.216071 | orchestrator |
2025-06-19 10:55:01.216220 | orchestrator | PLAY [Ceph validate mgrs] ******************************************************
2025-06-19 10:55:01.216238 | orchestrator |
2025-06-19 10:55:01.216250 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-06-19 10:55:01.216262 | orchestrator | Thursday 19 June 2025 10:54:46 +0000 (0:00:00.470) 0:00:00.470 *********
2025-06-19 10:55:01.216284 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-19 10:55:01.216295 | orchestrator |
2025-06-19 10:55:01.216306 | orchestrator | TASK [Create report output directory] ******************************************
2025-06-19 10:55:01.216317 | orchestrator | Thursday 19 June 2025 10:54:47 +0000 (0:00:00.648) 0:00:01.118 *********
2025-06-19 10:55:01.216327 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-19 10:55:01.216338 | orchestrator |
2025-06-19 10:55:01.216348 | orchestrator | TASK [Define report vars] ******************************************************
2025-06-19 10:55:01.216359 | orchestrator | Thursday 19 June 2025 10:54:48 +0000 (0:00:00.787) 0:00:01.906 *********
2025-06-19 10:55:01.216370 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:55:01.216381 | orchestrator |
2025-06-19 10:55:01.216392 | orchestrator | TASK [Prepare test data for container existance test] **************************
2025-06-19 10:55:01.216403 | orchestrator | Thursday 19 June 2025 10:54:48 +0000 (0:00:00.230) 0:00:02.136 *********
2025-06-19 10:55:01.216415 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:55:01.216426 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:55:01.216436 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:55:01.216447 | orchestrator |
2025-06-19 10:55:01.216458 | orchestrator | TASK [Get container info] ******************************************************
2025-06-19 10:55:01.216468 | orchestrator | Thursday 19 June 2025 10:54:48 +0000 (0:00:00.295) 0:00:02.432 *********
2025-06-19 10:55:01.216479 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:55:01.216489 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:55:01.216499 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:55:01.216510 | orchestrator |
2025-06-19 10:55:01.216520 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2025-06-19 10:55:01.216531 | orchestrator | Thursday 19 June 2025 10:54:49 +0000 (0:00:00.965) 0:00:03.397 *********
2025-06-19 10:55:01.216542 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:55:01.216552 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:55:01.216563 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:55:01.216573 | orchestrator |
2025-06-19 10:55:01.216584 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2025-06-19 10:55:01.216595 | orchestrator | Thursday 19 June 2025 10:54:50 +0000 (0:00:00.283) 0:00:03.681 *********
2025-06-19 10:55:01.216607 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:55:01.216619 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:55:01.216631 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:55:01.216643 | orchestrator |
2025-06-19 10:55:01.216655 | orchestrator | TASK [Prepare test data] *******************************************************
2025-06-19 10:55:01.216667 | orchestrator | Thursday 19 June 2025 10:54:50 +0000 (0:00:00.519) 0:00:04.200 *********
2025-06-19 10:55:01.216679 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:55:01.216691 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:55:01.216702 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:55:01.216714 | orchestrator |
2025-06-19 10:55:01.216726 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ********************
2025-06-19 10:55:01.216739 | orchestrator | Thursday 19 June 2025 10:54:51 +0000 (0:00:00.315) 0:00:04.516 *********
2025-06-19 10:55:01.216751 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:55:01.216763 | orchestrator | skipping: [testbed-node-1]
2025-06-19 10:55:01.216775 | orchestrator | skipping: [testbed-node-2]
2025-06-19 10:55:01.216786 | orchestrator |
2025-06-19 10:55:01.216799 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************
2025-06-19 10:55:01.216836 | orchestrator | Thursday 19 June 2025 10:54:51 +0000 (0:00:00.332) 0:00:04.848 *********
2025-06-19 10:55:01.216849 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:55:01.216861 | orchestrator | ok: [testbed-node-1]
2025-06-19 10:55:01.216873 | orchestrator | ok: [testbed-node-2]
2025-06-19 10:55:01.216885 | orchestrator |
2025-06-19 10:55:01.216897 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-06-19 10:55:01.216909 | orchestrator | Thursday 19 June 2025 10:54:51 +0000 (0:00:00.311) 0:00:05.160 *********
2025-06-19 10:55:01.216921 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:55:01.216933 | orchestrator |
2025-06-19 10:55:01.216946 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-06-19 10:55:01.216957 | orchestrator | Thursday 19 June 2025 10:54:52 +0000 (0:00:00.630) 0:00:05.790 *********
2025-06-19 10:55:01.216968 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:55:01.216978 | orchestrator |
2025-06-19 10:55:01.216989 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-06-19 10:55:01.216999 | orchestrator | Thursday 19 June 2025 10:54:52 +0000 (0:00:00.245) 0:00:06.035 *********
2025-06-19 10:55:01.217010 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:55:01.217021 | orchestrator |
2025-06-19 10:55:01.217032 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-19 10:55:01.217043 | orchestrator | Thursday 19 June 2025 10:54:52 +0000 (0:00:00.246) 0:00:06.282 *********
2025-06-19 10:55:01.217053 | orchestrator |
2025-06-19 10:55:01.217063 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-19 10:55:01.217092 | orchestrator | Thursday 19 June 2025 10:54:52 +0000 (0:00:00.074) 0:00:06.356 *********
2025-06-19 10:55:01.217103 | orchestrator |
2025-06-19 10:55:01.217119 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-19 10:55:01.217130 | orchestrator | Thursday 19 June 2025 10:54:52 +0000 (0:00:00.071) 0:00:06.427 *********
2025-06-19 10:55:01.217158 | orchestrator |
2025-06-19 10:55:01.217169 | orchestrator | TASK [Print report file information] *******************************************
2025-06-19 10:55:01.217180 | orchestrator | Thursday 19 June 2025 10:54:53 +0000 (0:00:00.076) 0:00:06.504 *********
2025-06-19 10:55:01.217191 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:55:01.217201 | orchestrator |
2025-06-19 10:55:01.217212 | orchestrator | TASK [Fail due to missing containers] ******************************************
2025-06-19 10:55:01.217223 | orchestrator | Thursday 19 June 2025 10:54:53 +0000 (0:00:00.241) 0:00:06.746 *********
2025-06-19 10:55:01.217233 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:55:01.217244 | orchestrator |
2025-06-19 10:55:01.217271 | orchestrator | TASK [Define mgr module test vars] *********************************************
2025-06-19 10:55:01.217282 | orchestrator | Thursday 19 June 2025 10:54:53 +0000 (0:00:00.245) 0:00:06.992 *********
2025-06-19 10:55:01.217293 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:55:01.217303 | orchestrator |
2025-06-19 10:55:01.217314 | orchestrator | TASK [Gather list of mgr modules] **********************************************
2025-06-19 10:55:01.217324 | orchestrator | Thursday 19 June 2025 10:54:53 +0000 (0:00:00.122) 0:00:07.114 *********
2025-06-19 10:55:01.217335 | orchestrator | changed: [testbed-node-0]
2025-06-19 10:55:01.217345 | orchestrator |
2025-06-19 10:55:01.217356 | orchestrator | TASK [Parse mgr module list from json] *****************************************
2025-06-19 10:55:01.217367 | orchestrator | Thursday 19 June 2025 10:54:55 +0000 (0:00:01.956) 0:00:09.071 *********
2025-06-19 10:55:01.217377 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:55:01.217388 | orchestrator |
2025-06-19 10:55:01.217398 | orchestrator | TASK [Extract list of enabled mgr modules] *************************************
2025-06-19 10:55:01.217409 | orchestrator | Thursday 19 June 2025 10:54:55 +0000 (0:00:00.272) 0:00:09.343 *********
2025-06-19 10:55:01.217419 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:55:01.217429 | orchestrator |
2025-06-19 10:55:01.217440 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************
2025-06-19 10:55:01.217459 | orchestrator | Thursday 19 June 2025 10:54:56 +0000 (0:00:00.477) 0:00:09.821 *********
2025-06-19 10:55:01.217469 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:55:01.217480 | orchestrator |
2025-06-19 10:55:01.217491 | orchestrator | TASK [Pass test if required mgr modules are enabled] ***************************
2025-06-19 10:55:01.217501 | orchestrator | Thursday 19 June 2025 10:54:56 +0000 (0:00:00.144) 0:00:09.965 *********
2025-06-19 10:55:01.217512 | orchestrator | ok: [testbed-node-0]
2025-06-19 10:55:01.217522 | orchestrator |
2025-06-19 10:55:01.217533 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-06-19 10:55:01.217543 | orchestrator | Thursday 19 June 2025 10:54:56 +0000 (0:00:00.156) 0:00:10.121 *********
2025-06-19 10:55:01.217554 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-19 10:55:01.217564 | orchestrator |
2025-06-19 10:55:01.217575 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-06-19 10:55:01.217585 | orchestrator | Thursday 19 June 2025 10:54:56 +0000 (0:00:00.254) 0:00:10.376 *********
2025-06-19 10:55:01.217596 | orchestrator | skipping: [testbed-node-0]
2025-06-19 10:55:01.217606 | orchestrator |
2025-06-19 10:55:01.217616 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-06-19 10:55:01.217627 | orchestrator | Thursday 19 June 2025 10:54:57 +0000 (0:00:00.245) 0:00:10.621 *********
2025-06-19 10:55:01.217637 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-19 10:55:01.217648 | orchestrator |
2025-06-19 10:55:01.217658 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-06-19 10:55:01.217669 | orchestrator | Thursday 19 June 2025 10:54:58 +0000 (0:00:01.273) 0:00:11.895 *********
2025-06-19 10:55:01.217679 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-19 10:55:01.217690 | orchestrator |
2025-06-19 10:55:01.217700 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-06-19 10:55:01.217711 | orchestrator | Thursday 19 June 2025 10:54:58 +0000 (0:00:00.249) 0:00:12.144 *********
2025-06-19 10:55:01.217721 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-19 10:55:01.217732 | orchestrator |
2025-06-19 10:55:01.217742 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-19 10:55:01.217753 | orchestrator | Thursday 19 June 2025 10:54:58 +0000 (0:00:00.245) 0:00:12.389 *********
2025-06-19 10:55:01.217763 | orchestrator |
2025-06-19 10:55:01.217773 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-19 10:55:01.217784 | orchestrator | Thursday 19 June 2025 10:54:58 +0000 (0:00:00.083) 0:00:12.473 *********
2025-06-19 10:55:01.217795 | orchestrator |
2025-06-19 10:55:01.217805 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-19 10:55:01.217815 | orchestrator | Thursday 19 June 2025 10:54:59 +0000 (0:00:00.070) 0:00:12.543 *********
2025-06-19 10:55:01.217826 | orchestrator |
2025-06-19 10:55:01.217836 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-06-19 10:55:01.217847 | orchestrator | Thursday 19 June 2025 10:54:59 +0000 (0:00:00.075) 0:00:12.618 *********
2025-06-19 10:55:01.217857 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-19 10:55:01.217867 | orchestrator |
2025-06-19 10:55:01.217878 | orchestrator | TASK [Print report file information] *******************************************
2025-06-19 10:55:01.217888 | orchestrator | Thursday 19 June 2025 10:55:00 +0000 (0:00:01.675) 0:00:14.293 *********
2025-06-19 10:55:01.217899 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2025-06-19 10:55:01.217909 | orchestrator |  "msg": [
2025-06-19 10:55:01.217920 | orchestrator |  "Validator run completed.",
2025-06-19 10:55:01.217931 | orchestrator |  "You can find the report file here:",
2025-06-19 10:55:01.217941 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2025-06-19T10:54:47+00:00-report.json",
2025-06-19 10:55:01.217953 | orchestrator |  "on the following host:",
2025-06-19 10:55:01.217968 | orchestrator |  "testbed-manager"
2025-06-19 10:55:01.217988 | orchestrator |  ]
2025-06-19 10:55:01.217999 | orchestrator | }
2025-06-19 10:55:01.218010 | orchestrator |
2025-06-19 10:55:01.218075 | orchestrator | PLAY RECAP *********************************************************************
2025-06-19 10:55:01.218087 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-06-19 10:55:01.218099 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-19 10:55:01.218119 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-19 10:55:01.512642 | orchestrator |
2025-06-19 10:55:01.512738 | orchestrator |
2025-06-19 10:55:01.512752 | orchestrator | TASKS RECAP ********************************************************************
2025-06-19 10:55:01.512765 | orchestrator | Thursday 19 June 2025 10:55:01 +0000 (0:00:00.409) 0:00:14.703 *********
2025-06-19 10:55:01.512776 | orchestrator | ===============================================================================
2025-06-19 10:55:01.512787 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.96s
2025-06-19 10:55:01.512797 | orchestrator | Write report file ------------------------------------------------------- 1.68s
2025-06-19 10:55:01.512808 | orchestrator | Aggregate test results step one ----------------------------------------- 1.27s
2025-06-19 10:55:01.512818 | orchestrator | Get container info ------------------------------------------------------ 0.97s
2025-06-19 10:55:01.512829 | orchestrator | Create report output directory ------------------------------------------ 0.79s
2025-06-19 10:55:01.512839 | orchestrator | Get timestamp for report file ------------------------------------------- 0.65s
2025-06-19 10:55:01.512850 | orchestrator | Aggregate test results step one ----------------------------------------- 0.63s
2025-06-19 10:55:01.512860 | orchestrator | Set test result to passed if container is existing ---------------------- 0.52s
2025-06-19 10:55:01.512871 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.48s
2025-06-19 10:55:01.512881 | orchestrator | Print report file information ------------------------------------------- 0.41s
2025-06-19 10:55:01.512892 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.33s
2025-06-19 10:55:01.512902 | orchestrator | Prepare test data ------------------------------------------------------- 0.32s
2025-06-19 10:55:01.512913 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.31s
2025-06-19 10:55:01.512923 | orchestrator | Prepare test data for container existance test -------------------------- 0.30s
2025-06-19 10:55:01.512934 | orchestrator | Set test result to failed if container is missing ----------------------- 0.28s
2025-06-19 10:55:01.512944 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.27s
2025-06-19 10:55:01.512955 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.25s
2025-06-19 10:55:01.512965 | orchestrator | Aggregate test results step two ----------------------------------------- 0.25s
2025-06-19 10:55:01.512976 | orchestrator | Aggregate test results step three --------------------------------------- 0.25s
2025-06-19 10:55:01.512986 | orchestrator | Aggregate test results step three --------------------------------------- 0.25s
2025-06-19 10:55:01.743993 | orchestrator | + osism validate ceph-osds
2025-06-19 10:55:03.416196 | orchestrator | Registering Redlock._acquired_script
2025-06-19 10:55:03.416297 | orchestrator | Registering Redlock._extend_script
2025-06-19 10:55:03.416314 | orchestrator | Registering Redlock._release_script
2025-06-19 10:55:12.066861 | orchestrator |
2025-06-19 10:55:12.066953 | orchestrator | PLAY [Ceph validate OSDs] ******************************************************
2025-06-19 10:55:12.066970 | orchestrator |
2025-06-19 10:55:12.066981 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-06-19 10:55:12.066992 | orchestrator | Thursday 19 June 2025 10:55:07 +0000 (0:00:00.424) 0:00:00.424 *********
2025-06-19 10:55:12.067003 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-19 10:55:12.067034 | orchestrator |
2025-06-19 10:55:12.067045 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-19 10:55:12.067056 | orchestrator | Thursday 19 June 2025 10:55:08 +0000 (0:00:00.648) 0:00:01.073 *********
2025-06-19 10:55:12.067066 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-19 10:55:12.067077 | orchestrator |
2025-06-19 10:55:12.067088 | orchestrator | TASK [Create report output directory] ******************************************
2025-06-19 10:55:12.067098 | orchestrator | Thursday 19 June 2025 10:55:08 +0000 (0:00:00.404) 0:00:01.478 *********
2025-06-19 10:55:12.067109 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-19 10:55:12.067120 | orchestrator |
2025-06-19 10:55:12.067160 | orchestrator | TASK [Define report vars] ******************************************************
2025-06-19 10:55:12.067172 | orchestrator | Thursday 19 June 2025 10:55:09 +0000 (0:00:00.947) 0:00:02.425 *********
2025-06-19 10:55:12.067183 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:55:12.067195 | orchestrator |
2025-06-19 10:55:12.067206 | orchestrator | TASK [Define OSD test variables] ***********************************************
2025-06-19 10:55:12.067216 | orchestrator | Thursday 19 June 2025 10:55:09 +0000 (0:00:00.135) 0:00:02.561 *********
2025-06-19 10:55:12.067227 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:55:12.067238 | orchestrator |
2025-06-19 10:55:12.067248 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2025-06-19 10:55:12.067259 | orchestrator | Thursday 19 June 2025 10:55:10 +0000 (0:00:00.148) 0:00:02.709 *********
2025-06-19 10:55:12.067269 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:55:12.067280 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:55:12.067290 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:55:12.067301 | orchestrator |
2025-06-19 10:55:12.067311 | orchestrator | TASK [Define OSD test variables] ***********************************************
2025-06-19 10:55:12.067322 | orchestrator | Thursday 19 June 2025 10:55:10 +0000 (0:00:00.308) 0:00:03.018 *********
2025-06-19 10:55:12.067332 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:55:12.067343 | orchestrator |
2025-06-19 10:55:12.067354 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2025-06-19 10:55:12.067364 | orchestrator | Thursday 19 June 2025 10:55:10 +0000 (0:00:00.144) 0:00:03.163 *********
2025-06-19 10:55:12.067374 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:55:12.067385 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:55:12.067396 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:55:12.067408 | orchestrator |
2025-06-19 10:55:12.067420 | orchestrator | TASK [Calculate total number of OSDs in cluster] *******************************
2025-06-19 10:55:12.067433 | orchestrator | Thursday 19 June 2025 10:55:10 +0000 (0:00:00.312) 0:00:03.476 *********
2025-06-19 10:55:12.067445 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:55:12.067457 | orchestrator |
2025-06-19 10:55:12.067469 | orchestrator | TASK [Prepare test data] *******************************************************
2025-06-19 10:55:12.067482 | orchestrator | Thursday 19 June 2025 10:55:11 +0000 (0:00:00.552) 0:00:04.028 *********
2025-06-19 10:55:12.067494 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:55:12.067506 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:55:12.067519 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:55:12.067531 | orchestrator |
2025-06-19 10:55:12.067543 | orchestrator | TASK [Get list of ceph-osd containers on host] *********************************
2025-06-19 10:55:12.067556 | orchestrator | Thursday 19 June 2025 10:55:11 +0000 (0:00:00.467) 0:00:04.495 *********
2025-06-19 10:55:12.067570 | orchestrator | skipping: [testbed-node-3] => (item={'id': '693f54adeabb9a8ba2afa15fbedc5e1cfa018661e6bac20de523a37b182d1bf3', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})
2025-06-19 10:55:12.067585 | orchestrator | skipping: [testbed-node-3] => (item={'id': '377be51d6cbe351810e632fab4716e0149aca0755dffbbae388e03e4b4d57ab8', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})
2025-06-19 10:55:12.067607 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1ed4ee14f396b716dac90d5aa7e519f694f4dfc1ab2f087bbd73cede3c055d51', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2025-06-19 10:55:12.067622 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c72b4ebdfe0619bd6d2c008ac2d5892ea6efe694114293664185ece010dd4bc3', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2025-06-19 10:55:12.067645 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'be8d3a9c1af5e00fde8b07d0e9e7e94bfac634933809f70c5d56d8dc2530f1c2', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})
2025-06-19 10:55:12.067674 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3397640286aa9477b47beed875058b8278b7067b9a31dd359be3b7e00f11c12f', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})
2025-06-19 10:55:12.067688 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'feed00fefe1d3a2d284689322da66acd0f6f897c561e4b0224035b499a5ae72d', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})
2025-06-19 10:55:12.067702 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3103103d7ad9c407c5a787fa320491f6190d1f8455d562bc1117f9e5261bd22b', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})
2025-06-19 10:55:12.067724 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ec645e5af4ed4b939ff8656286aa80e6ccd15279cedf2f052558d9f6a5108ab6', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})
2025-06-19 10:55:12.067742 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f7385e468d53cf511e60c9471f7b97390cd1e89e73923834b56f307479e70ca4', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})
2025-06-19 10:55:12.067754 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c063f2bdbc0c71b5ec9b62b43dceb9754a434a2dd417107da0095de5bf4e83b4', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 21 minutes'})
2025-06-19 10:55:12.067770 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'bec342e6cfa0a3e74a97b2f13a9a4e30c4ec6386ac974a2caf3a178c66cfaae0', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 22 minutes'})
2025-06-19 10:55:12.067781 | orchestrator | ok: [testbed-node-3] => (item={'id': 'da5cfcd1df6f85b4ea1304d87bbcaaa00adbe7083842d294fda34ba60ce49a37', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 23 minutes'})
2025-06-19 10:55:12.067793 | orchestrator | ok: [testbed-node-3] => (item={'id': 'd19a1cfe3e6f58027552e676fc93e3a41abd7bf458fecf0fd35e1fa6173c82ca', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 23 minutes'})
2025-06-19 10:55:12.067803 | orchestrator | skipping: [testbed-node-3] => (item={'id': '240940a91c56cee967b6c8fd1fe17046c0a379a8c80d78351798d7b4b97ec8f0', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})
2025-06-19 10:55:12.067814 | orchestrator | skipping: [testbed-node-3] => (item={'id': '2f3e5d87cf3d7384af06fab5b5b0b870c2a09da3fbd8d840e2d42633a37df8cc', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})
2025-06-19 10:55:12.067832 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'cd47ccc4278b73cec3a05f4110652af5dd4bf3b933198618201138c8a8cb2628', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})
2025-06-19 10:55:12.067843 | orchestrator | skipping: [testbed-node-4] => (item={'id': '86b15bec148fabbcdb44e4291ab852cc61d2e980df0e12f26b77238bc3949133', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})
2025-06-19 10:55:12.067855 | orchestrator | skipping: [testbed-node-3] => (item={'id': '7ec07b774ddfd911af7e11c817aece0595ebb6f8c597fe2f20dae15fef458c9d', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})
2025-06-19 10:55:12.067866 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6650e8fed35af354c9e3da75881c0c198adc2abb86c2b814a65fe47508cd3fab', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2025-06-19 10:55:12.067877 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'dd332c47b7cfdcf50dd87e99ec9471df10fd7c0abb534c8f8cedf23a3e4cfa99', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 28 minutes'})
2025-06-19 10:55:12.067894 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'fb61c0fe2cff41654ce12b7f276b9fe378309f8b1473bb0f036f73bc09f84673', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})
2025-06-19 10:55:12.366606 | orchestrator | skipping: [testbed-node-3] => (item={'id': '63e31494fc83867dbeab14230fcbcab293920a4d22d73cd7bfe13b4cb3dc7f55', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 28 minutes'})
2025-06-19 10:55:12.366712 | orchestrator | skipping: [testbed-node-4] => (item={'id': '681a6f4bf2fe4c1f1f2957fb6c4163a43979e19773cffd7b88748d31e5bf2b5a', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})
2025-06-19 10:55:12.366730 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3c4401b1ded9351fbc0da29afc557868b4852130daab47ad31781dfeb96e301f', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 30 minutes'})
2025-06-19 10:55:12.366742 | orchestrator |
orchestrator | skipping: [testbed-node-4] => (item={'id': '082f7019b294e8b22aae229b20c21cbe5bccdd5a528f0bd9c7a3e4af33673bd1', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})  2025-06-19 10:55:12.366754 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8bed8450d1a2b95f124e157057e373cd0cff935686ccfa7c0d3df8975f1f7978', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-06-19 10:55:12.366783 | orchestrator | skipping: [testbed-node-4] => (item={'id': '83016eb3d094f8234b505506bdf4d8beceaeef5a65325cb8ded600a420f5a963', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2025-06-19 10:55:12.366795 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6fd7c208c13c568da543def58ee06e12141658be3763257e0b05cabe73347f44', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2025-06-19 10:55:12.366806 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c59c3de08b7036db8c3bc741b588c8c4de696b60444e279ceafd0f6f28e8555e', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})  2025-06-19 10:55:12.366839 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b252897989d67f77622a2f65df292f56fc9ab0daa658e2d08a6058eca4389e2a', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 21 minutes'})  2025-06-19 10:55:12.366851 | orchestrator | skipping: [testbed-node-4] => (item={'id': '94df54f7bad89d6043abe555b2c37f4de723b6900de612742c6207705cead423', 'image': 
'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 22 minutes'})  2025-06-19 10:55:12.366863 | orchestrator | ok: [testbed-node-4] => (item={'id': 'a3662aa508f9d17660b5c54e3df19ff72bb45db182eb83a3561e84aa28e13b3b', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 23 minutes'}) 2025-06-19 10:55:12.366875 | orchestrator | ok: [testbed-node-4] => (item={'id': '3c876f160ffca36b216137767a08f9abc390417c23bd29b4db37a5472c45e41f', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 23 minutes'}) 2025-06-19 10:55:12.366886 | orchestrator | skipping: [testbed-node-4] => (item={'id': '303061143dd6746dc2896c3a30b84c76996fcb4dd0cfcc6b15ce49c743694fdb', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})  2025-06-19 10:55:12.366897 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3a0ff75465ceb8925df3825017589c1f56cf3ca060e03cc14b0ac6eb1cb34486', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2025-06-19 10:55:12.366909 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'eeb985d2bcb4c592cbd507cae330f334cbb5efc16ae0bf6f05a0686b75a942ff', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2025-06-19 10:55:12.366939 | orchestrator | skipping: [testbed-node-4] => (item={'id': '50c33b166af77e3a282d07a26f11766b96e1b8257b14b6766c87702de40b9190', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 28 minutes'})  2025-06-19 10:55:12.366950 | orchestrator | skipping: [testbed-node-4] => (item={'id': 
'56f35cd92b47f3906d52d7684656b4ac033749c6bf5d52d507debc6ebcad48e7', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 28 minutes'})  2025-06-19 10:55:12.366961 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ccaf8663d830fcecd4c8c99189d02102993dc8413e8482d7eb82d81bb09ba109', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 30 minutes'})  2025-06-19 10:55:12.366973 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6112c267967aa4dd14bed3dbf1d588fa6a07516e9f80fbbf5b1b95d8b2f265ed', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-06-19 10:55:12.366984 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e7cbec88daacac5c3585ab35a48db6a5b1b117aa8c35e9cae9687a1ba48dcc24', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-06-19 10:55:12.366995 | orchestrator | skipping: [testbed-node-5] => (item={'id': '442a321652d57e86b1672a3a08ffe8a357f006f37ddd10c138405ac8d272f693', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-06-19 10:55:12.367006 | orchestrator | skipping: [testbed-node-5] => (item={'id': '945f504d710c1033ba4f22887dc395dadb04de09050d1b36ddf01ee4b82090bd', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-06-19 10:55:12.367024 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e81b27739276ef3b8d8bc77e06c6719d6f0dfa8ef14ee29f237c4642b571e491', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})  2025-06-19 
10:55:12.367035 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'cf56b4bdecfa7fd95d915f0b929472e3727dc5a3c573101ceae1f1b5e5f71cb3', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 12 minutes (healthy)'})  2025-06-19 10:55:12.367046 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c636b295629440fdd4abf9a8efee1a6c617eab36e2ab20fb5a6491c053e3403e', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-06-19 10:55:12.367057 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6fd0122e8d869fad436c27bd545e4a5ee8f0d476c4ad35ea7abb8e3fa842deed', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 14 minutes'})  2025-06-19 10:55:12.367068 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8c1e293597d40e871731cb586ded1a55ee495a3d9c62be1e229aa6515cdffa3b', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2025-06-19 10:55:12.367079 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6f0ce24a3dc60860d3b54434eaff1c60f8ed626e62de51e9f82df708f153b2e5', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 20 minutes'})  2025-06-19 10:55:12.367090 | orchestrator | skipping: [testbed-node-5] => (item={'id': '86b242f884d90e6c7076a0eda382057f444f5de3b804837ab7dd6ce221199dd5', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 21 minutes'})  2025-06-19 10:55:12.367101 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8b86fd31ecd3e3a7a76965370258e0ebb1a49d4eda01ce11f0551a4a08de2ea2', 'image': 
'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 22 minutes'})  2025-06-19 10:55:12.367112 | orchestrator | ok: [testbed-node-5] => (item={'id': '2f406905cb56f2abf5a5a33f72c30687d6ecd7fb70c1360b26179b3553ccd267', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 23 minutes'}) 2025-06-19 10:55:12.367160 | orchestrator | ok: [testbed-node-5] => (item={'id': 'cb7d19d2b4e36e0792610fabd948190291e76d376d7a70418f0350c4be478a46', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 23 minutes'}) 2025-06-19 10:55:23.382611 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6e7207e5fa90abccef99a7bb8617fdd20ec145cb7f44b5f3ae0e06179b55d8ca', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 26 minutes'})  2025-06-19 10:55:23.382726 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1fe64720a21f2b65b0b855732cd4a4edf16e05c5d894bba0242c4c59faefdd29', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 27 minutes (healthy)'})  2025-06-19 10:55:23.382743 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a60daf4834cd8cfec6b5fe2cff61cfe40a218c2fab3139a2a512751df23c8da6', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 28 minutes (healthy)'})  2025-06-19 10:55:23.382775 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2d73a661e7961f127e208b6097a50ad012a622ac3cbda59216c44a6552d227b4', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 28 minutes'})  2025-06-19 10:55:23.382813 | orchestrator | skipping: [testbed-node-5] => (item={'id': 
'7dd14d621f4481940589ab63efd18c1563c1f25349053c60fb31e9ea1e7c5905', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 28 minutes'})  2025-06-19 10:55:23.382827 | orchestrator | skipping: [testbed-node-5] => (item={'id': '05c05a5fb2df8b2fe966f26886318ebc253416635ff00f717d5e5f4e4f4c1d28', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 30 minutes'})  2025-06-19 10:55:23.382838 | orchestrator | 2025-06-19 10:55:23.382850 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2025-06-19 10:55:23.382862 | orchestrator | Thursday 19 June 2025 10:55:12 +0000 (0:00:00.515) 0:00:05.010 ********* 2025-06-19 10:55:23.382873 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:55:23.382884 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:55:23.382895 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:55:23.382905 | orchestrator | 2025-06-19 10:55:23.382916 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2025-06-19 10:55:23.382927 | orchestrator | Thursday 19 June 2025 10:55:12 +0000 (0:00:00.278) 0:00:05.289 ********* 2025-06-19 10:55:23.382938 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:55:23.382949 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:55:23.382959 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:55:23.382970 | orchestrator | 2025-06-19 10:55:23.382981 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2025-06-19 10:55:23.382992 | orchestrator | Thursday 19 June 2025 10:55:13 +0000 (0:00:00.452) 0:00:05.742 ********* 2025-06-19 10:55:23.383002 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:55:23.383013 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:55:23.383023 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:55:23.383034 | orchestrator | 2025-06-19 
10:55:23.383044 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-19 10:55:23.383055 | orchestrator | Thursday 19 June 2025 10:55:13 +0000 (0:00:00.308) 0:00:06.050 ********* 2025-06-19 10:55:23.383066 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:55:23.383077 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:55:23.383088 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:55:23.383098 | orchestrator | 2025-06-19 10:55:23.383109 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2025-06-19 10:55:23.383150 | orchestrator | Thursday 19 June 2025 10:55:13 +0000 (0:00:00.299) 0:00:06.349 ********* 2025-06-19 10:55:23.383162 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2025-06-19 10:55:23.383175 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2025-06-19 10:55:23.383187 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:55:23.383199 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2025-06-19 10:55:23.383211 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2025-06-19 10:55:23.383223 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:55:23.383235 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2025-06-19 10:55:23.383246 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2025-06-19 10:55:23.383259 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:55:23.383271 | orchestrator | 2025-06-19 10:55:23.383282 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2025-06-19 10:55:23.383294 | orchestrator | Thursday 19 June 2025 
10:55:14 +0000 (0:00:00.310) 0:00:06.660 ********* 2025-06-19 10:55:23.383307 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:55:23.383319 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:55:23.383332 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:55:23.383343 | orchestrator | 2025-06-19 10:55:23.383355 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-06-19 10:55:23.383375 | orchestrator | Thursday 19 June 2025 10:55:14 +0000 (0:00:00.504) 0:00:07.165 ********* 2025-06-19 10:55:23.383387 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:55:23.383399 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:55:23.383411 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:55:23.383422 | orchestrator | 2025-06-19 10:55:23.383450 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-06-19 10:55:23.383464 | orchestrator | Thursday 19 June 2025 10:55:14 +0000 (0:00:00.322) 0:00:07.487 ********* 2025-06-19 10:55:23.383476 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:55:23.383488 | orchestrator | skipping: [testbed-node-4] 2025-06-19 10:55:23.383500 | orchestrator | skipping: [testbed-node-5] 2025-06-19 10:55:23.383511 | orchestrator | 2025-06-19 10:55:23.383521 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2025-06-19 10:55:23.383532 | orchestrator | Thursday 19 June 2025 10:55:15 +0000 (0:00:00.341) 0:00:07.829 ********* 2025-06-19 10:55:23.383542 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:55:23.383553 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:55:23.383563 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:55:23.383574 | orchestrator | 2025-06-19 10:55:23.383584 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-19 10:55:23.383595 | orchestrator | Thursday 19 June 2025 10:55:15 +0000 (0:00:00.288) 
0:00:08.118 ********* 2025-06-19 10:55:23.383606 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:55:23.383616 | orchestrator | 2025-06-19 10:55:23.383627 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-19 10:55:23.383637 | orchestrator | Thursday 19 June 2025 10:55:16 +0000 (0:00:00.628) 0:00:08.747 ********* 2025-06-19 10:55:23.383648 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:55:23.383658 | orchestrator | 2025-06-19 10:55:23.383669 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-19 10:55:23.383679 | orchestrator | Thursday 19 June 2025 10:55:16 +0000 (0:00:00.257) 0:00:09.004 ********* 2025-06-19 10:55:23.383690 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:55:23.383700 | orchestrator | 2025-06-19 10:55:23.383716 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-19 10:55:23.383727 | orchestrator | Thursday 19 June 2025 10:55:16 +0000 (0:00:00.246) 0:00:09.250 ********* 2025-06-19 10:55:23.383737 | orchestrator | 2025-06-19 10:55:23.383748 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-19 10:55:23.383758 | orchestrator | Thursday 19 June 2025 10:55:16 +0000 (0:00:00.073) 0:00:09.324 ********* 2025-06-19 10:55:23.383769 | orchestrator | 2025-06-19 10:55:23.383779 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-19 10:55:23.383790 | orchestrator | Thursday 19 June 2025 10:55:16 +0000 (0:00:00.067) 0:00:09.391 ********* 2025-06-19 10:55:23.383800 | orchestrator | 2025-06-19 10:55:23.383811 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-19 10:55:23.383822 | orchestrator | Thursday 19 June 2025 10:55:16 +0000 (0:00:00.067) 0:00:09.459 ********* 2025-06-19 10:55:23.383832 | orchestrator | 
skipping: [testbed-node-3] 2025-06-19 10:55:23.383843 | orchestrator | 2025-06-19 10:55:23.383853 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2025-06-19 10:55:23.383864 | orchestrator | Thursday 19 June 2025 10:55:17 +0000 (0:00:00.256) 0:00:09.716 ********* 2025-06-19 10:55:23.383874 | orchestrator | skipping: [testbed-node-3] 2025-06-19 10:55:23.383885 | orchestrator | 2025-06-19 10:55:23.383895 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-19 10:55:23.383906 | orchestrator | Thursday 19 June 2025 10:55:17 +0000 (0:00:00.238) 0:00:09.955 ********* 2025-06-19 10:55:23.383917 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:55:23.383927 | orchestrator | ok: [testbed-node-4] 2025-06-19 10:55:23.383938 | orchestrator | ok: [testbed-node-5] 2025-06-19 10:55:23.383948 | orchestrator | 2025-06-19 10:55:23.383959 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2025-06-19 10:55:23.383976 | orchestrator | Thursday 19 June 2025 10:55:17 +0000 (0:00:00.289) 0:00:10.244 ********* 2025-06-19 10:55:23.383987 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:55:23.383998 | orchestrator | 2025-06-19 10:55:23.384008 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2025-06-19 10:55:23.384019 | orchestrator | Thursday 19 June 2025 10:55:18 +0000 (0:00:00.624) 0:00:10.868 ********* 2025-06-19 10:55:23.384029 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-19 10:55:23.384040 | orchestrator | 2025-06-19 10:55:23.384051 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2025-06-19 10:55:23.384061 | orchestrator | Thursday 19 June 2025 10:55:19 +0000 (0:00:01.578) 0:00:12.447 ********* 2025-06-19 10:55:23.384072 | orchestrator | ok: [testbed-node-3] 2025-06-19 10:55:23.384083 | orchestrator | 
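The "Get ceph osd tree" and "Parse osd tree from JSON" tasks above fetch `ceph osd tree` from a mon host and inspect the parsed nodes for OSDs that are not up or in. The actual validator is an OSISM Ansible playbook, not the code below; this is only a minimal Python sketch of the same check, assuming the usual `ceph osd tree --format json` node shape (`type`, `status`, and `reweight`, where a reweight of 0 indicates an OSD that is out):

```python
import json

def osds_not_up_or_in(osd_tree_json: str) -> list[str]:
    """Return names of OSDs that are not both 'up' and 'in'.

    Assumes the JSON shape of `ceph osd tree --format json`: a top-level
    "nodes" list whose OSD entries have type == "osd", a "status" field
    ("up"/"down"), and a "reweight" field (0 means the OSD is out).
    """
    tree = json.loads(osd_tree_json)
    bad = []
    for node in tree.get("nodes", []):
        if node.get("type") != "osd":
            continue
        if node.get("status") != "up" or node.get("reweight", 0) == 0:
            bad.append(node["name"])
    return bad

# Tiny synthetic sample in the same shape; not taken from this job's cluster.
sample = json.dumps({"nodes": [
    {"id": 0, "name": "osd.0", "type": "osd", "status": "up", "reweight": 1.0},
    {"id": 1, "name": "osd.1", "type": "osd", "status": "down", "reweight": 1.0},
]})
print(osds_not_up_or_in(sample))  # → ['osd.1']
```

In this run the list is empty, which is why "Fail test if OSDs are not up or in" is skipped and "Pass test if OSDs are all up and in" reports ok.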
2025-06-19 10:55:23.384093 | orchestrator | TASK [Get OSDs that are not up or in] ******************************************
2025-06-19 10:55:23.384104 | orchestrator | Thursday 19 June 2025 10:55:19 +0000 (0:00:00.133) 0:00:12.580 *********
2025-06-19 10:55:23.384142 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:55:23.384154 | orchestrator |
2025-06-19 10:55:23.384164 | orchestrator | TASK [Fail test if OSDs are not up or in] **************************************
2025-06-19 10:55:23.384175 | orchestrator | Thursday 19 June 2025 10:55:20 +0000 (0:00:00.298) 0:00:12.878 *********
2025-06-19 10:55:23.384185 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:55:23.384196 | orchestrator |
2025-06-19 10:55:23.384206 | orchestrator | TASK [Pass test if OSDs are all up and in] *************************************
2025-06-19 10:55:23.384217 | orchestrator | Thursday 19 June 2025 10:55:20 +0000 (0:00:00.125) 0:00:13.004 *********
2025-06-19 10:55:23.384227 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:55:23.384238 | orchestrator |
2025-06-19 10:55:23.384249 | orchestrator | TASK [Prepare test data] *******************************************************
2025-06-19 10:55:23.384259 | orchestrator | Thursday 19 June 2025 10:55:20 +0000 (0:00:00.130) 0:00:13.134 *********
2025-06-19 10:55:23.384270 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:55:23.384280 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:55:23.384291 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:55:23.384301 | orchestrator |
2025-06-19 10:55:23.384312 | orchestrator | TASK [List ceph LVM volumes and collect data] **********************************
2025-06-19 10:55:23.384322 | orchestrator | Thursday 19 June 2025 10:55:20 +0000 (0:00:00.282) 0:00:13.416 *********
2025-06-19 10:55:23.384333 | orchestrator | changed: [testbed-node-4]
2025-06-19 10:55:23.384343 | orchestrator | changed: [testbed-node-3]
2025-06-19 10:55:23.384354 | orchestrator | changed: [testbed-node-5]
2025-06-19 10:55:23.384364 | orchestrator |
2025-06-19 10:55:23.384375 | orchestrator | TASK [Parse LVM data as JSON] **************************************************
2025-06-19 10:55:23.384392 | orchestrator | Thursday 19 June 2025 10:55:23 +0000 (0:00:02.610) 0:00:16.027 *********
2025-06-19 10:55:33.186337 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:55:33.186451 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:55:33.186467 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:55:33.186478 | orchestrator |
2025-06-19 10:55:33.186492 | orchestrator | TASK [Get unencrypted and encrypted OSDs] **************************************
2025-06-19 10:55:33.186504 | orchestrator | Thursday 19 June 2025 10:55:23 +0000 (0:00:00.300) 0:00:16.328 *********
2025-06-19 10:55:33.186515 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:55:33.186526 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:55:33.186536 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:55:33.186547 | orchestrator |
2025-06-19 10:55:33.186558 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] **************************
2025-06-19 10:55:33.186568 | orchestrator | Thursday 19 June 2025 10:55:24 +0000 (0:00:00.484) 0:00:16.812 *********
2025-06-19 10:55:33.186579 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:55:33.186591 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:55:33.186601 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:55:33.186612 | orchestrator |
2025-06-19 10:55:33.186622 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ********************
2025-06-19 10:55:33.186656 | orchestrator | Thursday 19 June 2025 10:55:24 +0000 (0:00:00.302) 0:00:17.115 *********
2025-06-19 10:55:33.186668 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:55:33.186678 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:55:33.186689 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:55:33.186699 | orchestrator |
2025-06-19 10:55:33.186710 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************
2025-06-19 10:55:33.186720 | orchestrator | Thursday 19 June 2025 10:55:24 +0000 (0:00:00.468) 0:00:17.583 *********
2025-06-19 10:55:33.186746 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:55:33.186757 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:55:33.186767 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:55:33.186778 | orchestrator |
2025-06-19 10:55:33.186788 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ******************
2025-06-19 10:55:33.186799 | orchestrator | Thursday 19 June 2025 10:55:25 +0000 (0:00:00.300) 0:00:17.884 *********
2025-06-19 10:55:33.186809 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:55:33.186820 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:55:33.186830 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:55:33.186841 | orchestrator |
2025-06-19 10:55:33.186851 | orchestrator | TASK [Prepare test data] *******************************************************
2025-06-19 10:55:33.186862 | orchestrator | Thursday 19 June 2025 10:55:25 +0000 (0:00:00.298) 0:00:18.182 *********
2025-06-19 10:55:33.186874 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:55:33.186885 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:55:33.186897 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:55:33.186908 | orchestrator |
2025-06-19 10:55:33.186920 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] ***************
2025-06-19 10:55:33.186931 | orchestrator | Thursday 19 June 2025 10:55:26 +0000 (0:00:00.521) 0:00:18.703 *********
2025-06-19 10:55:33.186944 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:55:33.186955 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:55:33.186966 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:55:33.186978 | orchestrator |
2025-06-19 10:55:33.186990 | orchestrator | TASK [Calculate sub test expression results] ***********************************
2025-06-19 10:55:33.187002 | orchestrator | Thursday 19 June 2025 10:55:26 +0000 (0:00:00.718) 0:00:19.421 *********
2025-06-19 10:55:33.187014 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:55:33.187026 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:55:33.187038 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:55:33.187049 | orchestrator |
2025-06-19 10:55:33.187061 | orchestrator | TASK [Fail test if any sub test failed] ****************************************
2025-06-19 10:55:33.187073 | orchestrator | Thursday 19 June 2025 10:55:27 +0000 (0:00:00.328) 0:00:19.750 *********
2025-06-19 10:55:33.187085 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:55:33.187122 | orchestrator | skipping: [testbed-node-4]
2025-06-19 10:55:33.187134 | orchestrator | skipping: [testbed-node-5]
2025-06-19 10:55:33.187147 | orchestrator |
2025-06-19 10:55:33.187159 | orchestrator | TASK [Pass test if no sub test failed] *****************************************
2025-06-19 10:55:33.187171 | orchestrator | Thursday 19 June 2025 10:55:27 +0000 (0:00:00.292) 0:00:20.042 *********
2025-06-19 10:55:33.187183 | orchestrator | ok: [testbed-node-3]
2025-06-19 10:55:33.187195 | orchestrator | ok: [testbed-node-4]
2025-06-19 10:55:33.187207 | orchestrator | ok: [testbed-node-5]
2025-06-19 10:55:33.187219 | orchestrator |
2025-06-19 10:55:33.187230 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-06-19 10:55:33.187240 | orchestrator | Thursday 19 June 2025 10:55:27 +0000 (0:00:00.305) 0:00:20.348 *********
2025-06-19 10:55:33.187251 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-19 10:55:33.187262 | orchestrator |
2025-06-19 10:55:33.187272 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-06-19 10:55:33.187283 | orchestrator | Thursday 19 June 2025 10:55:28 +0000 (0:00:00.686) 0:00:21.035 *********
2025-06-19 10:55:33.187293 | orchestrator | skipping: [testbed-node-3]
2025-06-19 10:55:33.187311 | orchestrator |
2025-06-19 10:55:33.187321 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-06-19 10:55:33.187332 | orchestrator | Thursday 19 June 2025 10:55:28 +0000 (0:00:00.285) 0:00:21.321 *********
2025-06-19 10:55:33.187343 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-19 10:55:33.187353 | orchestrator |
2025-06-19 10:55:33.187364 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-06-19 10:55:33.187374 | orchestrator | Thursday 19 June 2025 10:55:30 +0000 (0:00:01.637) 0:00:22.959 *********
2025-06-19 10:55:33.187385 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-19 10:55:33.187395 | orchestrator |
2025-06-19 10:55:33.187406 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-06-19 10:55:33.187416 | orchestrator | Thursday 19 June 2025 10:55:30 +0000 (0:00:00.273) 0:00:23.232 *********
2025-06-19 10:55:33.187427 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-19 10:55:33.187437 | orchestrator |
2025-06-19 10:55:33.187448 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-19 10:55:33.187458 | orchestrator | Thursday 19 June 2025 10:55:30 +0000 (0:00:00.256) 0:00:23.488 *********
2025-06-19 10:55:33.187469 | orchestrator |
2025-06-19 10:55:33.187496 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-19 10:55:33.187507 | orchestrator | Thursday 19 June 2025 10:55:30 +0000 (0:00:00.070) 0:00:23.559 *********
2025-06-19 10:55:33.187518 | orchestrator |
2025-06-19 10:55:33.187529 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-19 10:55:33.187539 | orchestrator | Thursday 19 June 2025 10:55:30 +0000 (0:00:00.069) 0:00:23.628 *********
2025-06-19 10:55:33.187550 | orchestrator |
2025-06-19 10:55:33.187561 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-06-19 10:55:33.187571 | orchestrator | Thursday 19 June 2025 10:55:31 +0000 (0:00:00.074) 0:00:23.703 *********
2025-06-19 10:55:33.187581 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-19 10:55:33.187592 | orchestrator |
2025-06-19 10:55:33.187602 | orchestrator | TASK [Print report file information] *******************************************
2025-06-19 10:55:33.187613 | orchestrator | Thursday 19 June 2025 10:55:32 +0000 (0:00:01.292) 0:00:24.996 *********
2025-06-19 10:55:33.187623 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => {
2025-06-19 10:55:33.187634 | orchestrator |     "msg": [
2025-06-19 10:55:33.187645 | orchestrator |         "Validator run completed.",
2025-06-19 10:55:33.187656 | orchestrator |         "You can find the report file here:",
2025-06-19 10:55:33.187667 | orchestrator |         "/opt/reports/validator/ceph-osds-validator-2025-06-19T10:55:08+00:00-report.json",
2025-06-19 10:55:33.187678 | orchestrator |         "on the following host:",
2025-06-19 10:55:33.187688 | orchestrator |         "testbed-manager"
2025-06-19 10:55:33.187699 | orchestrator |     ]
2025-06-19 10:55:33.187711 | orchestrator | }
2025-06-19 10:55:33.187721 | orchestrator |
2025-06-19 10:55:33.187732 | orchestrator | PLAY RECAP *********************************************************************
2025-06-19 10:55:33.187743 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2025-06-19 10:55:33.187755 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-06-19 10:55:33.187766 | orchestrator |
testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-19 10:55:33.187776 | orchestrator | 2025-06-19 10:55:33.187787 | orchestrator | 2025-06-19 10:55:33.187797 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-19 10:55:33.187808 | orchestrator | Thursday 19 June 2025 10:55:32 +0000 (0:00:00.540) 0:00:25.536 ********* 2025-06-19 10:55:33.187819 | orchestrator | =============================================================================== 2025-06-19 10:55:33.187835 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.61s 2025-06-19 10:55:33.187846 | orchestrator | Aggregate test results step one ----------------------------------------- 1.64s 2025-06-19 10:55:33.187856 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.58s 2025-06-19 10:55:33.187867 | orchestrator | Write report file ------------------------------------------------------- 1.29s 2025-06-19 10:55:33.187878 | orchestrator | Create report output directory ------------------------------------------ 0.95s 2025-06-19 10:55:33.187888 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.72s 2025-06-19 10:55:33.187899 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.69s 2025-06-19 10:55:33.187909 | orchestrator | Get timestamp for report file ------------------------------------------- 0.65s 2025-06-19 10:55:33.187920 | orchestrator | Aggregate test results step one ----------------------------------------- 0.63s 2025-06-19 10:55:33.187931 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.62s 2025-06-19 10:55:33.187941 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.55s 2025-06-19 10:55:33.187951 | orchestrator | Print report file information 
------------------------------------------- 0.54s 2025-06-19 10:55:33.187962 | orchestrator | Prepare test data ------------------------------------------------------- 0.52s 2025-06-19 10:55:33.187973 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.52s 2025-06-19 10:55:33.187983 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.50s 2025-06-19 10:55:33.188031 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.48s 2025-06-19 10:55:33.188042 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.47s 2025-06-19 10:55:33.188053 | orchestrator | Prepare test data ------------------------------------------------------- 0.47s 2025-06-19 10:55:33.188063 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.45s 2025-06-19 10:55:33.188074 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.41s 2025-06-19 10:55:33.426531 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2025-06-19 10:55:33.437010 | orchestrator | + set -e 2025-06-19 10:55:33.437056 | orchestrator | + source /opt/manager-vars.sh 2025-06-19 10:55:33.437520 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-19 10:55:33.437540 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-19 10:55:33.437551 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-19 10:55:33.437562 | orchestrator | ++ CEPH_VERSION=reef 2025-06-19 10:55:33.437572 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-19 10:55:33.437584 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-19 10:55:33.437595 | orchestrator | ++ export MANAGER_VERSION=latest 2025-06-19 10:55:33.437605 | orchestrator | ++ MANAGER_VERSION=latest 2025-06-19 10:55:33.437616 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-19 10:55:33.437626 | orchestrator | ++ 
OPENSTACK_VERSION=2024.2 2025-06-19 10:55:33.437637 | orchestrator | ++ export ARA=false 2025-06-19 10:55:33.437647 | orchestrator | ++ ARA=false 2025-06-19 10:55:33.437658 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-19 10:55:33.437668 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-19 10:55:33.437679 | orchestrator | ++ export TEMPEST=false 2025-06-19 10:55:33.437689 | orchestrator | ++ TEMPEST=false 2025-06-19 10:55:33.437699 | orchestrator | ++ export IS_ZUUL=true 2025-06-19 10:55:33.437710 | orchestrator | ++ IS_ZUUL=true 2025-06-19 10:55:33.437720 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.19 2025-06-19 10:55:33.437731 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.19 2025-06-19 10:55:33.437741 | orchestrator | ++ export EXTERNAL_API=false 2025-06-19 10:55:33.437751 | orchestrator | ++ EXTERNAL_API=false 2025-06-19 10:55:33.437762 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-19 10:55:33.437772 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-19 10:55:33.437782 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-19 10:55:33.437793 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-19 10:55:33.437803 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-19 10:55:33.437814 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-19 10:55:33.437824 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-06-19 10:55:33.437859 | orchestrator | + source /etc/os-release 2025-06-19 10:55:33.437870 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.2 LTS' 2025-06-19 10:55:33.437891 | orchestrator | ++ NAME=Ubuntu 2025-06-19 10:55:33.437902 | orchestrator | ++ VERSION_ID=24.04 2025-06-19 10:55:33.437913 | orchestrator | ++ VERSION='24.04.2 LTS (Noble Numbat)' 2025-06-19 10:55:33.437923 | orchestrator | ++ VERSION_CODENAME=noble 2025-06-19 10:55:33.437933 | orchestrator | ++ ID=ubuntu 2025-06-19 10:55:33.437944 | orchestrator | ++ ID_LIKE=debian 2025-06-19 10:55:33.437954 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 
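The trace above shows the check script's distro-detection pattern: it tests for `/etc/redhat-release`, sources `/etc/os-release`, and matches `$ID` against `ubuntu` before choosing a package set. A minimal self-contained sketch of that pattern, using a mock os-release file and an illustrative (not the script's actual) package list:

```shell
# Hypothetical sketch of the distro-detection pattern from the trace above.
# A temp file stands in for /etc/os-release so the snippet is self-contained;
# the package list is illustrative only.
os_release=$(mktemp)
cat > "$os_release" <<'EOF'
ID=ubuntu
VERSION_ID=24.04
EOF

# shellcheck disable=SC1090
. "$os_release"          # populates $ID, $VERSION_ID, as in the real script

if [ "$ID" = "ubuntu" ]; then
    packages="libmonitoring-plugin-perl monitoring-plugins-basic mysql-client"
else
    packages=""
fi

echo "distro=$ID packages=$packages"
rm -f "$os_release"
```

The real script then runs `dpkg -s $packages` and falls back to `sudo apt-get install -y $packages` when the status check fails, which is exactly the sequence visible in the log.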
2025-06-19 10:55:33.437965 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2025-06-19 10:55:33.437975 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2025-06-19 10:55:33.437986 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2025-06-19 10:55:33.437997 | orchestrator | ++ UBUNTU_CODENAME=noble 2025-06-19 10:55:33.438008 | orchestrator | ++ LOGO=ubuntu-logo 2025-06-19 10:55:33.438068 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2025-06-19 10:55:33.438082 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2025-06-19 10:55:33.438095 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-06-19 10:55:33.452827 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-06-19 10:55:55.766240 | orchestrator | 2025-06-19 10:55:55.766374 | orchestrator | # Status of Elasticsearch 2025-06-19 10:55:55.766391 | orchestrator | 2025-06-19 10:55:55.766404 | orchestrator | + pushd /opt/configuration/contrib 2025-06-19 10:55:55.766416 | orchestrator | + echo 2025-06-19 10:55:55.766427 | orchestrator | + echo '# Status of Elasticsearch' 2025-06-19 10:55:55.766439 | orchestrator | + echo 2025-06-19 10:55:55.766450 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2025-06-19 10:55:55.960534 | orchestrator | OK - elasticsearch (kolla_logging) is running. 
status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2025-06-19 10:55:55.960637 | orchestrator | 2025-06-19 10:55:55.960654 | orchestrator | # Status of MariaDB 2025-06-19 10:55:55.960695 | orchestrator | 2025-06-19 10:55:55.960707 | orchestrator | + echo 2025-06-19 10:55:55.960719 | orchestrator | + echo '# Status of MariaDB' 2025-06-19 10:55:55.960730 | orchestrator | + echo 2025-06-19 10:55:55.960740 | orchestrator | + MARIADB_USER=root_shard_0 2025-06-19 10:55:55.960753 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2025-06-19 10:55:56.025692 | orchestrator | Reading package lists... 2025-06-19 10:55:56.336610 | orchestrator | Building dependency tree... 2025-06-19 10:55:56.336968 | orchestrator | Reading state information... 2025-06-19 10:55:56.704682 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2025-06-19 10:55:56.704768 | orchestrator | bc set to manually installed. 2025-06-19 10:55:56.704779 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded. 
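The Nagios-style checks in this section (`check_elasticsearch`, `check_galera_cluster -c 1`, `check_rabbitmq_cluster`) all follow the same contract: print a one-line status and return exit code 0 for OK or 2 for CRITICAL. A minimal sketch of the threshold logic behind the Galera check, assuming the plugin treats a cluster size at or below the `-c` value as critical (function name and messages are illustrative, not the plugin's actual code):

```shell
# Hypothetical sketch of a Nagios-style cluster-size check: emit an
# OK/CRITICAL line and the matching exit code (0 = OK, 2 = CRITICAL).
check_cluster_size() {
    size=$1      # observed wsrep_cluster_size
    crit=$2      # critical floor, as passed via -c
    if [ "$size" -le "$crit" ]; then
        echo "CRITICAL: number of NODES = $size (wsrep_cluster_size)"
        return 2
    fi
    echo "OK: number of NODES = $size (wsrep_cluster_size)"
    return 0
}

check_cluster_size 3 1   # the healthy 3-node case seen in this log
```

Monitoring systems key off the exit code, not the message, which is why the log's `OK: number of NODES = 3 (wsrep_cluster_size)` line lets the script's `set -e` continue to the next check.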
2025-06-19 10:55:57.412534 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2025-06-19 10:55:57.412636 | orchestrator | 2025-06-19 10:55:57.412652 | orchestrator | # Status of Prometheus 2025-06-19 10:55:57.412664 | orchestrator | 2025-06-19 10:55:57.412676 | orchestrator | + echo 2025-06-19 10:55:57.412687 | orchestrator | + echo '# Status of Prometheus' 2025-06-19 10:55:57.412706 | orchestrator | + echo 2025-06-19 10:55:57.412724 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2025-06-19 10:55:57.478163 | orchestrator | Unauthorized 2025-06-19 10:55:57.481373 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2025-06-19 10:55:57.541000 | orchestrator | Unauthorized 2025-06-19 10:55:57.544317 | orchestrator | 2025-06-19 10:55:57.544356 | orchestrator | # Status of RabbitMQ 2025-06-19 10:55:57.544375 | orchestrator | 2025-06-19 10:55:57.544393 | orchestrator | + echo 2025-06-19 10:55:57.544410 | orchestrator | + echo '# Status of RabbitMQ' 2025-06-19 10:55:57.544428 | orchestrator | + echo 2025-06-19 10:55:57.544446 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2025-06-19 10:55:57.971762 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2025-06-19 10:55:57.980667 | orchestrator | 2025-06-19 10:55:57.980706 | orchestrator | # Status of Redis 2025-06-19 10:55:57.980750 | orchestrator | 2025-06-19 10:55:57.980763 | orchestrator | + echo 2025-06-19 10:55:57.980774 | orchestrator | + echo '# Status of Redis' 2025-06-19 10:55:57.980785 | orchestrator | + echo 2025-06-19 10:55:57.980797 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2025-06-19 10:55:57.985352 | orchestrator | 
TCP OK - 0.001 second response time on 192.168.16.10 port 6379|time=0.001482s;;;0.000000;10.000000 2025-06-19 10:55:57.985392 | orchestrator | 2025-06-19 10:55:57.985403 | orchestrator | # Create backup of MariaDB database 2025-06-19 10:55:57.985415 | orchestrator | 2025-06-19 10:55:57.985426 | orchestrator | + popd 2025-06-19 10:55:57.985437 | orchestrator | + echo 2025-06-19 10:55:57.985448 | orchestrator | + echo '# Create backup of MariaDB database' 2025-06-19 10:55:57.985474 | orchestrator | + echo 2025-06-19 10:55:57.985486 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2025-06-19 10:55:59.712314 | orchestrator | 2025-06-19 10:55:59 | INFO  | Task 7fe81adb-ff03-4a08-9ea1-1abe72ec8a59 (mariadb_backup) was prepared for execution. 2025-06-19 10:55:59.712416 | orchestrator | 2025-06-19 10:55:59 | INFO  | It takes a moment until task 7fe81adb-ff03-4a08-9ea1-1abe72ec8a59 (mariadb_backup) has been started and output is visible here. 2025-06-19 10:57:47.375694 | orchestrator | 2025-06-19 10:57:47.375813 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-19 10:57:47.375831 | orchestrator | 2025-06-19 10:57:47.375843 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-19 10:57:47.375855 | orchestrator | Thursday 19 June 2025 10:56:03 +0000 (0:00:00.179) 0:00:00.179 ********* 2025-06-19 10:57:47.375866 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:57:47.375878 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:57:47.375889 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:57:47.375900 | orchestrator | 2025-06-19 10:57:47.375911 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-19 10:57:47.375984 | orchestrator | Thursday 19 June 2025 10:56:03 +0000 (0:00:00.328) 0:00:00.507 ********* 2025-06-19 10:57:47.375996 | orchestrator | ok: [testbed-node-0] => 
(item=enable_mariadb_True) 2025-06-19 10:57:47.376007 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-06-19 10:57:47.376018 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-06-19 10:57:47.376029 | orchestrator | 2025-06-19 10:57:47.376039 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-06-19 10:57:47.376050 | orchestrator | 2025-06-19 10:57:47.376061 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-06-19 10:57:47.376072 | orchestrator | Thursday 19 June 2025 10:56:04 +0000 (0:00:00.540) 0:00:01.048 ********* 2025-06-19 10:57:47.376083 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-19 10:57:47.376094 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-06-19 10:57:47.376104 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-06-19 10:57:47.376115 | orchestrator | 2025-06-19 10:57:47.376126 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-19 10:57:47.376136 | orchestrator | Thursday 19 June 2025 10:56:04 +0000 (0:00:00.390) 0:00:01.439 ********* 2025-06-19 10:57:47.376162 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-19 10:57:47.376174 | orchestrator | 2025-06-19 10:57:47.376196 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-06-19 10:57:47.376207 | orchestrator | Thursday 19 June 2025 10:56:05 +0000 (0:00:00.535) 0:00:01.975 ********* 2025-06-19 10:57:47.376218 | orchestrator | ok: [testbed-node-1] 2025-06-19 10:57:47.376228 | orchestrator | ok: [testbed-node-0] 2025-06-19 10:57:47.376239 | orchestrator | ok: [testbed-node-2] 2025-06-19 10:57:47.376250 | orchestrator | 2025-06-19 10:57:47.376277 | orchestrator | TASK [mariadb : Taking full database backup via 
Mariabackup] ******************* 2025-06-19 10:57:47.376288 | orchestrator | Thursday 19 June 2025 10:56:08 +0000 (0:00:03.272) 0:00:05.248 ********* 2025-06-19 10:57:47.376320 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-06-19 10:57:47.376332 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-06-19 10:57:47.376343 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-06-19 10:57:47.376354 | orchestrator | mariadb_bootstrap_restart 2025-06-19 10:57:47.376364 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:57:47.376375 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:57:47.376385 | orchestrator | changed: [testbed-node-0] 2025-06-19 10:57:47.376396 | orchestrator | 2025-06-19 10:57:47.376406 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-06-19 10:57:47.376417 | orchestrator | skipping: no hosts matched 2025-06-19 10:57:47.376428 | orchestrator | 2025-06-19 10:57:47.376438 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-06-19 10:57:47.376449 | orchestrator | skipping: no hosts matched 2025-06-19 10:57:47.376459 | orchestrator | 2025-06-19 10:57:47.376470 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-06-19 10:57:47.376480 | orchestrator | skipping: no hosts matched 2025-06-19 10:57:47.376491 | orchestrator | 2025-06-19 10:57:47.376501 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-06-19 10:57:47.376512 | orchestrator | 2025-06-19 10:57:47.376522 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-06-19 10:57:47.376533 | orchestrator | Thursday 19 June 2025 10:57:46 +0000 (0:01:37.761) 0:01:43.010 ********* 2025-06-19 10:57:47.376543 | orchestrator | 
skipping: [testbed-node-0] 2025-06-19 10:57:47.376554 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:57:47.376564 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:57:47.376574 | orchestrator | 2025-06-19 10:57:47.376585 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-06-19 10:57:47.376596 | orchestrator | Thursday 19 June 2025 10:57:46 +0000 (0:00:00.309) 0:01:43.320 ********* 2025-06-19 10:57:47.376606 | orchestrator | skipping: [testbed-node-0] 2025-06-19 10:57:47.376617 | orchestrator | skipping: [testbed-node-1] 2025-06-19 10:57:47.376627 | orchestrator | skipping: [testbed-node-2] 2025-06-19 10:57:47.376638 | orchestrator | 2025-06-19 10:57:47.376648 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-19 10:57:47.376660 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-19 10:57:47.376671 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-19 10:57:47.376682 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-19 10:57:47.376693 | orchestrator | 2025-06-19 10:57:47.376703 | orchestrator | 2025-06-19 10:57:47.376714 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-19 10:57:47.376724 | orchestrator | Thursday 19 June 2025 10:57:47 +0000 (0:00:00.391) 0:01:43.711 ********* 2025-06-19 10:57:47.376734 | orchestrator | =============================================================================== 2025-06-19 10:57:47.376745 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 97.76s 2025-06-19 10:57:47.376774 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.27s 2025-06-19 10:57:47.376786 | orchestrator | Group hosts based on 
enabled services ----------------------------------- 0.54s 2025-06-19 10:57:47.376796 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.54s 2025-06-19 10:57:47.376807 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.39s 2025-06-19 10:57:47.376817 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.39s 2025-06-19 10:57:47.376828 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s 2025-06-19 10:57:47.376846 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.31s 2025-06-19 10:57:47.595576 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2025-06-19 10:57:47.604340 | orchestrator | + set -e 2025-06-19 10:57:47.604375 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-19 10:57:47.604389 | orchestrator | ++ export INTERACTIVE=false 2025-06-19 10:57:47.604400 | orchestrator | ++ INTERACTIVE=false 2025-06-19 10:57:47.604411 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-19 10:57:47.604421 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-19 10:57:47.604432 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-06-19 10:57:47.605580 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-06-19 10:57:47.611800 | orchestrator | 2025-06-19 10:57:47.611832 | orchestrator | # OpenStack endpoints 2025-06-19 10:57:47.611844 | orchestrator | 2025-06-19 10:57:47.611855 | orchestrator | ++ export MANAGER_VERSION=latest 2025-06-19 10:57:47.611866 | orchestrator | ++ MANAGER_VERSION=latest 2025-06-19 10:57:47.611877 | orchestrator | + export OS_CLOUD=admin 2025-06-19 10:57:47.611888 | orchestrator | + OS_CLOUD=admin 2025-06-19 10:57:47.611899 | orchestrator | + echo 2025-06-19 10:57:47.611910 | orchestrator | + echo '# OpenStack 
endpoints' 2025-06-19 10:57:47.611967 | orchestrator | + echo 2025-06-19 10:57:47.611980 | orchestrator | + openstack endpoint list 2025-06-19 10:57:51.077190 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-06-19 10:57:51.077297 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2025-06-19 10:57:51.077312 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-06-19 10:57:51.077344 | orchestrator | | 06b16c96f3c141b89d0ffe3c05456cfa | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2025-06-19 10:57:51.077360 | orchestrator | | 25d42db290204eaaaadd8b595971787b | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2025-06-19 10:57:51.077371 | orchestrator | | 3b2693533bac4efb8e8c228edb95f3a1 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2025-06-19 10:57:51.077381 | orchestrator | | 3bc7ae7d53554a5aa625453ab685d14a | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2025-06-19 10:57:51.077392 | orchestrator | | 415ff0225fb04c74802c501ff1660273 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2025-06-19 10:57:51.077403 | orchestrator | | 46adb67b30ae412da8c72ca7f2241c8f | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2025-06-19 10:57:51.077413 | orchestrator | | 4c6c857dc9474b86805ae814c8bceb2d | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2025-06-19 10:57:51.077424 | orchestrator | | 64f0510fde81482093aab5a62127d564 | RegionOne | cinderv3 | volumev3 | True | 
public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-06-19 10:57:51.077434 | orchestrator | | 75b7726569234a61a028bb8029a41cee | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2025-06-19 10:57:51.077445 | orchestrator | | 7c8d25ccf31c4a0d9fae6ea5e08e260e | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2025-06-19 10:57:51.077455 | orchestrator | | 805669ca8c7a4628a7cc840f5efe147d | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2025-06-19 10:57:51.077488 | orchestrator | | 9540e4384c2245758b664af689ac262f | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2025-06-19 10:57:51.077499 | orchestrator | | a7739d73e688416e9a78e14f708702de | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2025-06-19 10:57:51.077510 | orchestrator | | ae1711d7c7d246e29a3afaf18d49fa1d | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2025-06-19 10:57:51.077520 | orchestrator | | b67993ce6f604c96952a8a1894f622ea | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2025-06-19 10:57:51.077531 | orchestrator | | b8dcdccb4694420cb86f2764e8749bfa | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-06-19 10:57:51.077541 | orchestrator | | ca0b070c61704fc3a099dec814de887f | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2025-06-19 10:57:51.077552 | orchestrator | | cc8ab50ec0ec4f739667313e419466fa | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-06-19 10:57:51.077562 | orchestrator | | ccfec2a0d0d64d11ae2f30923690777b | RegionOne | nova | compute | True | public | 
https://api.testbed.osism.xyz:8774/v2.1 | 2025-06-19 10:57:51.077573 | orchestrator | | d34594e3a72b4ca0a5c0b732416d4515 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-06-19 10:57:51.077601 | orchestrator | | d9c60e2bca5b407e895ee9f3c6afefde | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2025-06-19 10:57:51.077613 | orchestrator | | e343abd106b246c89a80732f9196a182 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2025-06-19 10:57:51.077623 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-06-19 10:57:51.318406 | orchestrator | 2025-06-19 10:57:51.318476 | orchestrator | # Cinder 2025-06-19 10:57:51.318481 | orchestrator | 2025-06-19 10:57:51.318486 | orchestrator | + echo 2025-06-19 10:57:51.318490 | orchestrator | + echo '# Cinder' 2025-06-19 10:57:51.318494 | orchestrator | + echo 2025-06-19 10:57:51.318498 | orchestrator | + openstack volume service list 2025-06-19 10:57:53.945836 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-06-19 10:57:53.945988 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2025-06-19 10:57:53.946005 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-06-19 10:57:53.946064 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2025-06-19T10:57:44.000000 | 2025-06-19 10:57:53.946076 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2025-06-19T10:57:45.000000 | 2025-06-19 10:57:53.946087 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2025-06-19T10:57:46.000000 
| 2025-06-19 10:57:53.946098 | orchestrator | | cinder-volume | testbed-node-3@rbd-volumes | nova | enabled | up | 2025-06-19T10:57:48.000000 | 2025-06-19 10:57:53.946109 | orchestrator | | cinder-volume | testbed-node-4@rbd-volumes | nova | enabled | up | 2025-06-19T10:57:48.000000 | 2025-06-19 10:57:53.946120 | orchestrator | | cinder-volume | testbed-node-5@rbd-volumes | nova | enabled | up | 2025-06-19T10:57:48.000000 | 2025-06-19 10:57:53.946131 | orchestrator | | cinder-backup | testbed-node-4 | nova | enabled | up | 2025-06-19T10:57:50.000000 | 2025-06-19 10:57:53.946169 | orchestrator | | cinder-backup | testbed-node-3 | nova | enabled | up | 2025-06-19T10:57:51.000000 | 2025-06-19 10:57:53.946180 | orchestrator | | cinder-backup | testbed-node-5 | nova | enabled | up | 2025-06-19T10:57:51.000000 | 2025-06-19 10:57:53.946191 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-06-19 10:57:54.190332 | orchestrator | 2025-06-19 10:57:54.190427 | orchestrator | # Neutron 2025-06-19 10:57:54.190443 | orchestrator | 2025-06-19 10:57:54.190455 | orchestrator | + echo 2025-06-19 10:57:54.190466 | orchestrator | + echo '# Neutron' 2025-06-19 10:57:54.190478 | orchestrator | + echo 2025-06-19 10:57:54.190489 | orchestrator | + openstack network agent list 2025-06-19 10:57:57.009680 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-06-19 10:57:57.009782 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2025-06-19 10:57:57.009797 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-06-19 10:57:57.009828 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | 
ovn-controller |
2025-06-19 10:57:57.009840 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller |
2025-06-19 10:57:57.009851 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller |
2025-06-19 10:57:57.009861 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller |
2025-06-19 10:57:57.009872 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller |
2025-06-19 10:57:57.009883 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller |
2025-06-19 10:57:57.009893 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent |
2025-06-19 10:57:57.009904 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent |
2025-06-19 10:57:57.009968 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent |
2025-06-19 10:57:57.009979 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2025-06-19 10:57:57.255124 | orchestrator | + openstack network service provider list
2025-06-19 10:57:59.820548 | orchestrator | +---------------+------+---------+
2025-06-19 10:57:59.820647 | orchestrator | | Service Type | Name | Default |
2025-06-19 10:57:59.820661 | orchestrator | +---------------+------+---------+
2025-06-19 10:57:59.820673 | orchestrator | | L3_ROUTER_NAT | ovn | True |
2025-06-19 10:57:59.820683 | orchestrator | +---------------+------+---------+
2025-06-19 10:58:00.075804 | orchestrator |
2025-06-19 10:58:00.075900 | orchestrator | # Nova
2025-06-19 10:58:00.075966 | orchestrator |
2025-06-19 10:58:00.075977 | orchestrator | + echo
2025-06-19 10:58:00.075988 | orchestrator | + echo '# Nova'
2025-06-19 10:58:00.075999 | orchestrator | + echo
2025-06-19 10:58:00.076011 | orchestrator | + openstack compute service list
2025-06-19 10:58:02.724407 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2025-06-19 10:58:02.724517 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At |
2025-06-19 10:58:02.724557 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2025-06-19 10:58:02.724584 | orchestrator | | db546ee1-46a6-4257-9779-53719b26d5a1 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2025-06-19T10:57:57.000000 |
2025-06-19 10:58:02.724596 | orchestrator | | e4a108a4-b303-4a9c-9268-ea8e6f3300f3 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2025-06-19T10:57:57.000000 |
2025-06-19 10:58:02.724607 | orchestrator | | 453dfec2-a7d6-4c67-aece-1d752e199562 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2025-06-19T10:57:56.000000 |
2025-06-19 10:58:02.724618 | orchestrator | | bbbf3450-99c4-4237-8cca-e8194c900e88 | nova-conductor | testbed-node-2 | internal | enabled | up | 2025-06-19T10:57:57.000000 |
2025-06-19 10:58:02.724629 | orchestrator | | e4a39499-2465-4bf8-95f0-f9e0f27ab0ad | nova-conductor | testbed-node-1 | internal | enabled | up | 2025-06-19T10:58:01.000000 |
2025-06-19 10:58:02.724639 | orchestrator | | 4a016fef-9d2b-4009-a0cf-449e22872df6 | nova-conductor | testbed-node-0 | internal | enabled | up | 2025-06-19T10:57:54.000000 |
2025-06-19 10:58:02.724650 | orchestrator | | 254efde6-fceb-44db-92e3-567981005c00 | nova-compute | testbed-node-3 | nova | enabled | up | 2025-06-19T10:57:59.000000 |
2025-06-19 10:58:02.724661 | orchestrator | | 6bc84502-e3f1-47d9-8e66-dd9c1099a7c6 | nova-compute | testbed-node-4 | nova | enabled | up | 2025-06-19T10:57:59.000000 |
2025-06-19 10:58:02.724672 | orchestrator | | 3d495dc0-27b1-42b6-bd01-91ed6b2b8516 | nova-compute | testbed-node-5 | nova | enabled | up | 2025-06-19T10:58:00.000000 |
2025-06-19 10:58:02.724683 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2025-06-19 10:58:02.981814 | orchestrator | + openstack hypervisor list
2025-06-19 10:58:07.872528 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2025-06-19 10:58:07.872627 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
2025-06-19 10:58:07.872640 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2025-06-19 10:58:07.872651 | orchestrator | | 4862b8f6-d811-4ab8-854b-f234f7432879 | testbed-node-4 | QEMU | 192.168.16.14 | up |
2025-06-19 10:58:07.872662 | orchestrator | | 1c141975-28c9-46e6-9975-e7cdccd4399e | testbed-node-3 | QEMU | 192.168.16.13 | up |
2025-06-19 10:58:07.872673 | orchestrator | | 129666fe-f106-41ea-b651-944daa1c147c | testbed-node-5 | QEMU | 192.168.16.15 | up |
2025-06-19 10:58:07.872684 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2025-06-19 10:58:08.158776 | orchestrator |
2025-06-19 10:58:08.158871 | orchestrator | # Run OpenStack test play
2025-06-19 10:58:08.158886 | orchestrator |
2025-06-19 10:58:08.158935 | orchestrator | + echo
2025-06-19 10:58:08.158948 | orchestrator | + echo '# Run OpenStack test play'
2025-06-19 10:58:08.158960 | orchestrator | + echo
2025-06-19 10:58:08.158971 | orchestrator | + osism apply --environment openstack test
2025-06-19 10:58:09.794275 | orchestrator | 2025-06-19 10:58:09 | INFO  | Trying to run play test in environment openstack
2025-06-19 10:58:09.798689 | orchestrator | Registering Redlock._acquired_script
2025-06-19 10:58:09.798732 | orchestrator | Registering Redlock._extend_script
2025-06-19 10:58:09.798745 | orchestrator | Registering Redlock._release_script
2025-06-19 10:58:09.856799 | orchestrator | 2025-06-19 10:58:09 | INFO  | Task d53884e1-155a-49d1-962e-7d8c4fbc4a8e (test) was prepared for execution.
2025-06-19 10:58:09.856873 | orchestrator | 2025-06-19 10:58:09 | INFO  | It takes a moment until task d53884e1-155a-49d1-962e-7d8c4fbc4a8e (test) has been started and output is visible here.
2025-06-19 11:04:11.150873 | orchestrator |
2025-06-19 11:04:11.151010 | orchestrator | PLAY [Create test project] *****************************************************
2025-06-19 11:04:11.151029 | orchestrator |
2025-06-19 11:04:11.151042 | orchestrator | TASK [Create test domain] ******************************************************
2025-06-19 11:04:11.151075 | orchestrator | Thursday 19 June 2025 10:58:13 +0000 (0:00:00.077) 0:00:00.077 *********
2025-06-19 11:04:11.151087 | orchestrator | changed: [localhost]
2025-06-19 11:04:11.151099 | orchestrator |
2025-06-19 11:04:11.151110 | orchestrator | TASK [Create test-admin user] **************************************************
2025-06-19 11:04:11.151120 | orchestrator | Thursday 19 June 2025 10:58:17 +0000 (0:00:03.669) 0:00:03.746 *********
2025-06-19 11:04:11.151131 | orchestrator | changed: [localhost]
2025-06-19 11:04:11.151148 | orchestrator |
2025-06-19 11:04:11.151167 | orchestrator | TASK [Add manager role to user test-admin] *************************************
2025-06-19 11:04:11.151188 | orchestrator | Thursday 19 June 2025 10:58:21 +0000 (0:00:04.030) 0:00:07.777 *********
2025-06-19 11:04:11.151208 | orchestrator | changed: [localhost]
2025-06-19 11:04:11.151240 | orchestrator |
2025-06-19 11:04:11.151263 | orchestrator | TASK [Create test project] *****************************************************
2025-06-19 11:04:11.151274 | orchestrator | Thursday 19 June 2025 10:58:27 +0000 (0:00:05.885) 0:00:13.662 *********
2025-06-19 11:04:11.151285 | orchestrator | changed: [localhost]
2025-06-19 11:04:11.151295 | orchestrator |
2025-06-19 11:04:11.151306 | orchestrator | TASK [Create test user] ********************************************************
2025-06-19 11:04:11.151317 | orchestrator | Thursday 19 June 2025 10:58:30 +0000 (0:00:03.666) 0:00:17.328 *********
2025-06-19 11:04:11.151327 | orchestrator | changed: [localhost]
2025-06-19 11:04:11.151338 | orchestrator |
2025-06-19 11:04:11.151348 | orchestrator | TASK [Add member roles to user test] *******************************************
2025-06-19 11:04:11.151359 | orchestrator | Thursday 19 June 2025 10:58:34 +0000 (0:00:03.963) 0:00:21.291 *********
2025-06-19 11:04:11.151370 | orchestrator | changed: [localhost] => (item=load-balancer_member)
2025-06-19 11:04:11.151381 | orchestrator | changed: [localhost] => (item=member)
2025-06-19 11:04:11.151392 | orchestrator | changed: [localhost] => (item=creator)
2025-06-19 11:04:11.151403 | orchestrator |
2025-06-19 11:04:11.151426 | orchestrator | TASK [Create test server group] ************************************************
2025-06-19 11:04:11.151439 | orchestrator | Thursday 19 June 2025 10:58:46 +0000 (0:00:11.808) 0:00:33.100 *********
2025-06-19 11:04:11.151451 | orchestrator | changed: [localhost]
2025-06-19 11:04:11.151463 | orchestrator |
2025-06-19 11:04:11.151475 | orchestrator | TASK [Create ssh security group] ***********************************************
2025-06-19 11:04:11.151487 | orchestrator | Thursday 19 June 2025 10:58:50 +0000 (0:00:04.163) 0:00:37.263 *********
2025-06-19 11:04:11.151499 | orchestrator | changed: [localhost]
2025-06-19 11:04:11.151512 | orchestrator |
2025-06-19 11:04:11.151524 | orchestrator | TASK [Add rule to ssh security group] ******************************************
2025-06-19 11:04:11.151541 | orchestrator | Thursday 19 June 2025 10:58:55 +0000 (0:00:04.929) 0:00:42.193 *********
2025-06-19 11:04:11.151560 | orchestrator | changed: [localhost]
2025-06-19 11:04:11.151609 | orchestrator |
2025-06-19 11:04:11.151626 | orchestrator | TASK [Create icmp security group] **********************************************
2025-06-19 11:04:11.151639 | orchestrator | Thursday 19 June 2025 10:58:59 +0000 (0:00:04.105) 0:00:46.299 *********
2025-06-19 11:04:11.151651 | orchestrator | changed: [localhost]
2025-06-19 11:04:11.151664 | orchestrator |
2025-06-19 11:04:11.151676 | orchestrator | TASK [Add rule to icmp security group] *****************************************
2025-06-19 11:04:11.151689 | orchestrator | Thursday 19 June 2025 10:59:03 +0000 (0:00:03.756) 0:00:50.055 *********
2025-06-19 11:04:11.151701 | orchestrator | changed: [localhost]
2025-06-19 11:04:11.151713 | orchestrator |
2025-06-19 11:04:11.151726 | orchestrator | TASK [Create test keypair] *****************************************************
2025-06-19 11:04:11.151737 | orchestrator | Thursday 19 June 2025 10:59:07 +0000 (0:00:03.927) 0:00:53.983 *********
2025-06-19 11:04:11.151748 | orchestrator | changed: [localhost]
2025-06-19 11:04:11.151758 | orchestrator |
2025-06-19 11:04:11.151768 | orchestrator | TASK [Create test network topology] ********************************************
2025-06-19 11:04:11.151778 | orchestrator | Thursday 19 June 2025 10:59:11 +0000 (0:00:03.822) 0:00:57.806 *********
2025-06-19 11:04:11.151789 | orchestrator | changed: [localhost]
2025-06-19 11:04:11.151809 | orchestrator |
2025-06-19 11:04:11.151819 | orchestrator | TASK [Create test instances] ***************************************************
2025-06-19 11:04:11.151830 | orchestrator | Thursday 19 June 2025 10:59:26 +0000 (0:00:15.060) 0:01:12.867 *********
2025-06-19 11:04:11.151840 | orchestrator | changed: [localhost] => (item=test)
2025-06-19 11:04:11.151850 | orchestrator | changed: [localhost] => (item=test-1)
2025-06-19 11:04:11.151861 | orchestrator | changed: [localhost] => (item=test-2)
2025-06-19 11:04:11.151872 | orchestrator |
2025-06-19 11:04:11.151882 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2025-06-19 11:04:11.151896 | orchestrator | changed: [localhost] => (item=test-3)
2025-06-19 11:04:11.151907 | orchestrator |
2025-06-19 11:04:11.151918 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2025-06-19 11:04:11.151928 | orchestrator |
2025-06-19 11:04:11.151939 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2025-06-19 11:04:11.151949 | orchestrator | changed: [localhost] => (item=test-4)
2025-06-19 11:04:11.151960 | orchestrator |
2025-06-19 11:04:11.151971 | orchestrator | TASK [Add metadata to instances] ***********************************************
2025-06-19 11:04:11.151981 | orchestrator | Thursday 19 June 2025 11:02:50 +0000 (0:03:23.732) 0:04:36.599 *********
2025-06-19 11:04:11.151995 | orchestrator | changed: [localhost] => (item=test)
2025-06-19 11:04:11.152014 | orchestrator | changed: [localhost] => (item=test-1)
2025-06-19 11:04:11.152033 | orchestrator | changed: [localhost] => (item=test-2)
2025-06-19 11:04:11.152045 | orchestrator | changed: [localhost] => (item=test-3)
2025-06-19 11:04:11.152055 | orchestrator | changed: [localhost] => (item=test-4)
2025-06-19 11:04:11.152066 | orchestrator |
2025-06-19 11:04:11.152076 | orchestrator | TASK [Add tag to instances] ****************************************************
2025-06-19 11:04:11.152087 | orchestrator | Thursday 19 June 2025 11:03:13 +0000 (0:00:23.172) 0:04:59.772 *********
2025-06-19 11:04:11.152097 | orchestrator | changed: [localhost] => (item=test)
2025-06-19 11:04:11.152108 | orchestrator | changed: [localhost] => (item=test-1)
2025-06-19 11:04:11.152138 | orchestrator | changed: [localhost] => (item=test-2)
2025-06-19 11:04:11.152149 | orchestrator | changed: [localhost] => (item=test-3)
2025-06-19 11:04:11.152160 | orchestrator | changed: [localhost] => (item=test-4)
2025-06-19 11:04:11.152170 | orchestrator |
2025-06-19 11:04:11.152181 | orchestrator | TASK [Create test volume] ******************************************************
2025-06-19 11:04:11.152191 | orchestrator | Thursday 19 June 2025 11:03:45 +0000 (0:00:32.168) 0:05:31.940 *********
2025-06-19 11:04:11.152202 | orchestrator | changed: [localhost]
2025-06-19 11:04:11.152212 | orchestrator |
2025-06-19 11:04:11.152282 | orchestrator | TASK [Attach test volume] ******************************************************
2025-06-19 11:04:11.152293 | orchestrator | Thursday 19 June 2025 11:03:52 +0000 (0:00:06.846) 0:05:38.787 *********
2025-06-19 11:04:11.152304 | orchestrator | changed: [localhost]
2025-06-19 11:04:11.152314 | orchestrator |
2025-06-19 11:04:11.152325 | orchestrator | TASK [Create floating ip address] **********************************************
2025-06-19 11:04:11.152336 | orchestrator | Thursday 19 June 2025 11:04:05 +0000 (0:00:13.332) 0:05:52.119 *********
2025-06-19 11:04:11.152347 | orchestrator | ok: [localhost]
2025-06-19 11:04:11.152358 | orchestrator |
2025-06-19 11:04:11.152368 | orchestrator | TASK [Print floating ip address] ***********************************************
2025-06-19 11:04:11.152379 | orchestrator | Thursday 19 June 2025 11:04:10 +0000 (0:00:05.080) 0:05:57.199 *********
2025-06-19 11:04:11.152389 | orchestrator | ok: [localhost] => {
2025-06-19 11:04:11.152400 | orchestrator |     "msg": "192.168.112.184"
2025-06-19 11:04:11.152411 | orchestrator | }
2025-06-19 11:04:11.152430 | orchestrator |
2025-06-19 11:04:11.152451 | orchestrator | PLAY RECAP *********************************************************************
2025-06-19 11:04:11.152470 | orchestrator | localhost : ok=20  changed=18  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-19 11:04:11.152482 | orchestrator |
2025-06-19 11:04:11.152493 | orchestrator |
2025-06-19 11:04:11.152504 | orchestrator | TASKS RECAP ********************************************************************
2025-06-19 11:04:11.152523 | orchestrator | Thursday 19 June 2025 11:04:10 +0000 (0:00:00.040) 0:05:57.239 *********
2025-06-19 11:04:11.152541 | orchestrator | ===============================================================================
2025-06-19 11:04:11.152552 | orchestrator | Create test instances ------------------------------------------------- 203.73s
2025-06-19 11:04:11.152563 | orchestrator | Add tag to instances --------------------------------------------------- 32.17s
2025-06-19 11:04:11.152603 | orchestrator | Add metadata to instances ---------------------------------------------- 23.17s
2025-06-19 11:04:11.152622 | orchestrator | Create test network topology ------------------------------------------- 15.06s
2025-06-19 11:04:11.152641 | orchestrator | Attach test volume ----------------------------------------------------- 13.33s
2025-06-19 11:04:11.152659 | orchestrator | Add member roles to user test ------------------------------------------ 11.81s
2025-06-19 11:04:11.152671 | orchestrator | Create test volume ------------------------------------------------------ 6.85s
2025-06-19 11:04:11.152681 | orchestrator | Add manager role to user test-admin ------------------------------------- 5.89s
2025-06-19 11:04:11.152692 | orchestrator | Create floating ip address ---------------------------------------------- 5.08s
2025-06-19 11:04:11.152702 | orchestrator | Create ssh security group ----------------------------------------------- 4.93s
2025-06-19 11:04:11.152713 | orchestrator | Create test server group ------------------------------------------------ 4.16s
2025-06-19 11:04:11.152723 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.11s
2025-06-19 11:04:11.152734 | orchestrator | Create test-admin user -------------------------------------------------- 4.03s
2025-06-19 11:04:11.152744 | orchestrator | Create test user -------------------------------------------------------- 3.96s
2025-06-19 11:04:11.152759 | orchestrator | Add rule to icmp security group ----------------------------------------- 3.93s
2025-06-19 11:04:11.152777 | orchestrator | Create test keypair ----------------------------------------------------- 3.82s
2025-06-19 11:04:11.152797 | orchestrator | Create icmp security group ---------------------------------------------- 3.76s
2025-06-19 11:04:11.152810 | orchestrator | Create test domain ------------------------------------------------------ 3.67s
2025-06-19 11:04:11.152821 | orchestrator | Create test project ----------------------------------------------------- 3.67s
2025-06-19 11:04:11.152832 | orchestrator | Print floating ip address ----------------------------------------------- 0.04s
2025-06-19 11:04:11.415456 | orchestrator | + server_list
2025-06-19 11:04:11.415554 | orchestrator | + openstack --os-cloud test server list
2025-06-19 11:04:15.084361 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
2025-06-19 11:04:15.084463 | orchestrator | | ID | Name | Status | Networks | Image | Flavor |
2025-06-19 11:04:15.084477 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
2025-06-19 11:04:15.084488 | orchestrator | | 3617e52f-7bde-445d-932d-7a2c661ba8da | test-4 | ACTIVE | auto_allocated_network=10.42.0.31, 192.168.112.175 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-06-19 11:04:15.084499 | orchestrator | | 3b092941-fd16-4d00-aa15-e1e4c4093c92 | test-3 | ACTIVE | auto_allocated_network=10.42.0.45, 192.168.112.122 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-06-19 11:04:15.084510 | orchestrator | | ecb71e93-a792-43be-a498-257ed194d9a5 | test-2 | ACTIVE | auto_allocated_network=10.42.0.61, 192.168.112.138 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-06-19 11:04:15.084521 | orchestrator | | d3247e59-efb8-449f-8e62-aa80c907c108 | test-1 | ACTIVE | auto_allocated_network=10.42.0.27, 192.168.112.112 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-06-19 11:04:15.084531 | orchestrator | | 5c3f165d-a039-4576-9a60-63ab66a22679 | test | ACTIVE | auto_allocated_network=10.42.0.54, 192.168.112.184 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-06-19 11:04:15.084542 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
2025-06-19 11:04:15.389251 | orchestrator | + openstack --os-cloud test server show test
2025-06-19 11:04:18.955713 | orchestrator | +-------------------------------------+-----------------------------------------------------------------------------+
2025-06-19 11:04:18.955826 | orchestrator | | Field | Value |
2025-06-19 11:04:18.955852 | orchestrator | +-------------------------------------+-----------------------------------------------------------------------------+
2025-06-19 11:04:18.955882 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-06-19 11:04:18.955903 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-06-19 11:04:18.955922 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-06-19 11:04:18.955939 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test |
2025-06-19 11:04:18.955956 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-06-19 11:04:18.955974 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-06-19 11:04:18.955992 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-06-19 11:04:18.956012 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-06-19 11:04:18.956074 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-06-19 11:04:18.956088 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-06-19 11:04:18.956098 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-06-19 11:04:18.956123 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-06-19 11:04:18.956134 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-06-19 11:04:18.956145 | orchestrator | | OS-EXT-STS:task_state | None |
2025-06-19 11:04:18.956156 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-06-19 11:04:18.956167 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-19T10:59:56.000000 |
2025-06-19 11:04:18.956179 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-06-19 11:04:18.956192 | orchestrator | | accessIPv4 | |
2025-06-19 11:04:18.956204 | orchestrator | | accessIPv6 | |
2025-06-19 11:04:18.956224 | orchestrator | | addresses | auto_allocated_network=10.42.0.54, 192.168.112.184 |
2025-06-19 11:04:18.956246 | orchestrator | | config_drive | |
2025-06-19 11:04:18.956258 | orchestrator | | created | 2025-06-19T10:59:34Z |
2025-06-19 11:04:18.956270 | orchestrator | | description | None |
2025-06-19 11:04:18.956287 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-06-19 11:04:18.956300 | orchestrator | | hostId | 3588656399437826fbee04d442788193a5daaff50651d530a5ed8e4d |
2025-06-19 11:04:18.956313 | orchestrator | | host_status | None |
2025-06-19 11:04:18.956325 | orchestrator | | id | 5c3f165d-a039-4576-9a60-63ab66a22679 |
2025-06-19 11:04:18.956338 | orchestrator | | image | Cirros 0.6.2 (1536ef1a-b94d-4b4a-aaaf-036333c512dd) |
2025-06-19 11:04:18.956350 | orchestrator | | key_name | test |
2025-06-19 11:04:18.956362 | orchestrator | | locked | False |
2025-06-19 11:04:18.956380 | orchestrator | | locked_reason | None |
2025-06-19 11:04:18.956393 | orchestrator | | name | test |
2025-06-19 11:04:18.956413 | orchestrator | | pinned_availability_zone | None |
2025-06-19 11:04:18.956426 | orchestrator | | progress | 0 |
2025-06-19 11:04:18.956439 | orchestrator | | project_id | e8893c1c4aed45f69d43933a71f33e8a |
2025-06-19 11:04:18.956456 | orchestrator | | properties | hostname='test' |
2025-06-19 11:04:18.956469 | orchestrator | | security_groups | name='icmp' |
2025-06-19 11:04:18.956481 | orchestrator | | | name='ssh' |
2025-06-19 11:04:18.956494 | orchestrator | | server_groups | None |
2025-06-19 11:04:18.956507 | orchestrator | | status | ACTIVE |
2025-06-19 11:04:18.956526 | orchestrator | | tags | test |
2025-06-19 11:04:18.956537 | orchestrator | | trusted_image_certificates | None |
2025-06-19 11:04:18.956547 | orchestrator | | updated | 2025-06-19T11:02:55Z |
2025-06-19 11:04:18.956597 | orchestrator | | user_id | 61e7a0efc89042dc8a9816c7cbecc487 |
2025-06-19 11:04:18.956612 | orchestrator | | volumes_attached | delete_on_termination='False', id='52306345-2dff-40f9-9854-a67201ea9455' |
2025-06-19 11:04:18.960213 | orchestrator | +-------------------------------------+-----------------------------------------------------------------------------+
2025-06-19 11:04:19.234856 | orchestrator | + openstack --os-cloud test server show test-1
2025-06-19 11:04:22.367025 | orchestrator | +-------------------------------------+-----------------------------------------------------------------------------+
2025-06-19 11:04:22.367829 | orchestrator | | Field | Value |
2025-06-19 11:04:22.367863 | orchestrator | +-------------------------------------+-----------------------------------------------------------------------------+
2025-06-19 11:04:22.367877 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-06-19 11:04:22.367890 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-06-19 11:04:22.367922 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-06-19 11:04:22.367933 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 |
2025-06-19 11:04:22.367944 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-06-19 11:04:22.367955 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-06-19 11:04:22.367966 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-06-19 11:04:22.367976 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-06-19 11:04:22.368008 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-06-19 11:04:22.368027 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-06-19 11:04:22.368038 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-06-19 11:04:22.368049 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-06-19 11:04:22.368060 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-06-19 11:04:22.368078 | orchestrator | | OS-EXT-STS:task_state | None |
2025-06-19 11:04:22.368088 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-06-19 11:04:22.368100 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-19T11:00:41.000000 |
2025-06-19 11:04:22.368111 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-06-19 11:04:22.368122 | orchestrator | | accessIPv4 | |
2025-06-19 11:04:22.368133 | orchestrator | | accessIPv6 | |
2025-06-19 11:04:22.368144 | orchestrator | | addresses | auto_allocated_network=10.42.0.27, 192.168.112.112 |
2025-06-19 11:04:22.368162 | orchestrator | | config_drive | |
2025-06-19 11:04:22.368181 | orchestrator | | created | 2025-06-19T11:00:17Z |
2025-06-19 11:04:22.368193 | orchestrator | | description | None |
2025-06-19 11:04:22.368210 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-06-19 11:04:22.368221 | orchestrator | | hostId | ae1922e7b5e2888078aed40eee44d9fc64261abc05e48de05a28e6fc |
2025-06-19 11:04:22.368232 | orchestrator | | host_status | None |
2025-06-19 11:04:22.368243 | orchestrator | | id | d3247e59-efb8-449f-8e62-aa80c907c108 |
2025-06-19 11:04:22.368254 | orchestrator | | image | Cirros 0.6.2 (1536ef1a-b94d-4b4a-aaaf-036333c512dd) |
2025-06-19 11:04:22.368265 | orchestrator | | key_name | test |
2025-06-19 11:04:22.368275 | orchestrator | | locked | False |
2025-06-19 11:04:22.368286 | orchestrator | | locked_reason | None |
2025-06-19 11:04:22.368297 | orchestrator | | name | test-1 |
2025-06-19 11:04:22.368319 | orchestrator | | pinned_availability_zone | None |
2025-06-19 11:04:22.368331 | orchestrator | | progress | 0 |
2025-06-19 11:04:22.368349 | orchestrator | | project_id | e8893c1c4aed45f69d43933a71f33e8a |
2025-06-19 11:04:22.368360 | orchestrator | | properties | hostname='test-1' |
2025-06-19 11:04:22.368371 | orchestrator | | security_groups | name='icmp' |
2025-06-19 11:04:22.368381 | orchestrator | | | name='ssh' |
2025-06-19 11:04:22.368392 | orchestrator | | server_groups | None |
2025-06-19 11:04:22.368403 | orchestrator | | status | ACTIVE |
2025-06-19 11:04:22.368414 | orchestrator | | tags | test |
2025-06-19 11:04:22.368425 | orchestrator | | trusted_image_certificates | None |
2025-06-19 11:04:22.368436 | orchestrator | | updated | 2025-06-19T11:02:59Z |
2025-06-19 11:04:22.368452 | orchestrator | | user_id | 61e7a0efc89042dc8a9816c7cbecc487 |
2025-06-19 11:04:22.368467 | orchestrator | | volumes_attached | |
2025-06-19 11:04:22.370886 | orchestrator | +-------------------------------------+-----------------------------------------------------------------------------+
2025-06-19 11:04:22.639758 | orchestrator | + openstack --os-cloud test server show test-2
2025-06-19 11:04:25.761237 | orchestrator | +-------------------------------------+-----------------------------------------------------------------------------+
2025-06-19 11:04:25.761344 | orchestrator | | Field | Value |
2025-06-19 11:04:25.761359 | orchestrator | +-------------------------------------+-----------------------------------------------------------------------------+
2025-06-19 11:04:25.761370 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-06-19 11:04:25.761382 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-06-19 11:04:25.761393 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-06-19 11:04:25.761404 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 |
2025-06-19 11:04:25.761415 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-06-19 11:04:25.761425 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-06-19 11:04:25.761454 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-06-19 11:04:25.761491 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-06-19 11:04:25.761521 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-06-19 11:04:25.761533 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-06-19 11:04:25.761615 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-06-19 11:04:25.761628 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-06-19 11:04:25.761639 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-06-19 11:04:25.761650 | orchestrator | | OS-EXT-STS:task_state | None |
2025-06-19 11:04:25.761661 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-06-19 11:04:25.761672 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-19T11:01:23.000000 |
2025-06-19 11:04:25.761682 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-06-19 11:04:25.761703 | orchestrator | | accessIPv4 | |
2025-06-19 11:04:25.761720 | orchestrator | | accessIPv6 | |
2025-06-19 11:04:25.761732 | orchestrator | | addresses | auto_allocated_network=10.42.0.61, 192.168.112.138 |
2025-06-19 11:04:25.761751 | orchestrator | | config_drive | |
2025-06-19 11:04:25.761764 | orchestrator | | created | 2025-06-19T11:01:01Z |
2025-06-19 11:04:25.761777 | orchestrator | | description | None |
2025-06-19 11:04:25.761790 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-06-19 11:04:25.761802 | orchestrator | | hostId | 221e63d3500f0883032b19177dbfc91d1a38c887bb169b2274eba459 |
2025-06-19 11:04:25.761815 | orchestrator | | host_status | None |
2025-06-19 11:04:25.761827 | orchestrator | | id | ecb71e93-a792-43be-a498-257ed194d9a5 |
2025-06-19 11:04:25.761839 | orchestrator | | image | Cirros 0.6.2 (1536ef1a-b94d-4b4a-aaaf-036333c512dd) |
2025-06-19 11:04:25.761865 | orchestrator | | key_name | test |
2025-06-19 11:04:25.761878 | orchestrator | | locked | False |
2025-06-19 11:04:25.761892 | orchestrator | | locked_reason | None |
2025-06-19 11:04:25.761905 | orchestrator | | name | test-2 |
2025-06-19 11:04:25.761924 | orchestrator | | pinned_availability_zone | None |
2025-06-19 11:04:25.761938 | orchestrator | | progress | 0 |
2025-06-19 11:04:25.761950 | orchestrator | | project_id | e8893c1c4aed45f69d43933a71f33e8a |
2025-06-19 11:04:25.761962 | orchestrator | | properties | hostname='test-2' |
2025-06-19 11:04:25.761974 | orchestrator | | security_groups | name='icmp' |
2025-06-19 11:04:25.761987 | orchestrator | | | name='ssh' |
2025-06-19 11:04:25.762000 | orchestrator | | server_groups | None |
2025-06-19 11:04:25.762121 | orchestrator | | status | ACTIVE |
2025-06-19 11:04:25.762140 | orchestrator | | tags | test |
2025-06-19 11:04:25.762156 | orchestrator | | trusted_image_certificates | None |
2025-06-19 11:04:25.762168 | orchestrator | | updated | 2025-06-19T11:03:04Z |
2025-06-19 11:04:25.762185 | orchestrator | | user_id | 61e7a0efc89042dc8a9816c7cbecc487 |
2025-06-19 11:04:25.762197 | orchestrator | | volumes_attached | |
2025-06-19 11:04:25.764293 | orchestrator | +-------------------------------------+-----------------------------------------------------------------------------+
2025-06-19 11:04:26.102672 | orchestrator | + openstack --os-cloud test server show test-3
2025-06-19 11:04:29.172066 | orchestrator | +-------------------------------------+-----------------------------------------------------------------------------+
2025-06-19 11:04:29.172184 | orchestrator | | Field | Value |
2025-06-19 11:04:29.172201 | orchestrator | +-------------------------------------+-----------------------------------------------------------------------------+
2025-06-19 11:04:29.172213 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-06-19 11:04:29.172250 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-06-19 11:04:29.172262 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-06-19 11:04:29.172273 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 |
2025-06-19 11:04:29.172299 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-06-19 11:04:29.172310 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-06-19 11:04:29.172321 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-06-19 11:04:29.172332 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-06-19 11:04:29.172361 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-06-19 11:04:29.172373 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-06-19 11:04:29.172384 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-06-19 11:04:29.172403 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-06-19 11:04:29.172414 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-06-19 11:04:29.172425 | orchestrator | | OS-EXT-STS:task_state | None |
2025-06-19 11:04:29.172436 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-06-19 11:04:29.172446 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-19T11:02:01.000000 |
2025-06-19 11:04:29.172463 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-06-19 11:04:29.172474 | orchestrator | | accessIPv4 | |
2025-06-19 11:04:29.172485 | orchestrator | | accessIPv6 | |
2025-06-19 11:04:29.172496 | orchestrator | | addresses | auto_allocated_network=10.42.0.45, 192.168.112.122 |
2025-06-19 11:04:29.172514 | orchestrator | | config_drive | |
2025-06-19 11:04:29.172525 | orchestrator | | created | 2025-06-19T11:01:44Z |
2025-06-19 11:04:29.172543 | orchestrator | | description | None |
2025-06-19 11:04:29.172630 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-06-19 11:04:29.172645 | orchestrator | | hostId | 3588656399437826fbee04d442788193a5daaff50651d530a5ed8e4d |
2025-06-19 11:04:29.172656 | orchestrator | | host_status | None |
2025-06-19 11:04:29.172667 | orchestrator | | id |
3b092941-fd16-4d00-aa15-e1e4c4093c92 | 2025-06-19 11:04:29.172684 | orchestrator | | image | Cirros 0.6.2 (1536ef1a-b94d-4b4a-aaaf-036333c512dd) | 2025-06-19 11:04:29.172695 | orchestrator | | key_name | test | 2025-06-19 11:04:29.172706 | orchestrator | | locked | False | 2025-06-19 11:04:29.172717 | orchestrator | | locked_reason | None | 2025-06-19 11:04:29.172728 | orchestrator | | name | test-3 | 2025-06-19 11:04:29.172745 | orchestrator | | pinned_availability_zone | None | 2025-06-19 11:04:29.172764 | orchestrator | | progress | 0 | 2025-06-19 11:04:29.172775 | orchestrator | | project_id | e8893c1c4aed45f69d43933a71f33e8a | 2025-06-19 11:04:29.172786 | orchestrator | | properties | hostname='test-3' | 2025-06-19 11:04:29.172797 | orchestrator | | security_groups | name='icmp' | 2025-06-19 11:04:29.172807 | orchestrator | | | name='ssh' | 2025-06-19 11:04:29.172818 | orchestrator | | server_groups | None | 2025-06-19 11:04:29.172834 | orchestrator | | status | ACTIVE | 2025-06-19 11:04:29.172846 | orchestrator | | tags | test | 2025-06-19 11:04:29.172857 | orchestrator | | trusted_image_certificates | None | 2025-06-19 11:04:29.172868 | orchestrator | | updated | 2025-06-19T11:03:08Z | 2025-06-19 11:04:29.172884 | orchestrator | | user_id | 61e7a0efc89042dc8a9816c7cbecc487 | 2025-06-19 11:04:29.172902 | orchestrator | | volumes_attached | | 2025-06-19 11:04:29.176957 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-19 11:04:29.455519 | orchestrator | + openstack --os-cloud test server show test-4 2025-06-19 11:04:32.556045 | orchestrator | 
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-19 11:04:32.556178 | orchestrator | | Field | Value | 2025-06-19 11:04:32.556193 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-19 11:04:32.556205 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-06-19 11:04:32.556239 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-06-19 11:04:32.556251 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-06-19 11:04:32.556262 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2025-06-19 11:04:32.556273 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-06-19 11:04:32.556308 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-06-19 11:04:32.556320 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-06-19 11:04:32.556331 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-06-19 11:04:32.556362 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-06-19 11:04:32.556374 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-06-19 11:04:32.556385 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-06-19 11:04:32.556396 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-06-19 11:04:32.556407 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-06-19 11:04:32.556418 | orchestrator | | OS-EXT-STS:task_state | None | 2025-06-19 11:04:32.556429 | 
orchestrator | | OS-EXT-STS:vm_state | active | 2025-06-19 11:04:32.556440 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-19T11:02:33.000000 | 2025-06-19 11:04:32.556471 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-06-19 11:04:32.556484 | orchestrator | | accessIPv4 | | 2025-06-19 11:04:32.556495 | orchestrator | | accessIPv6 | | 2025-06-19 11:04:32.556506 | orchestrator | | addresses | auto_allocated_network=10.42.0.31, 192.168.112.175 | 2025-06-19 11:04:32.556525 | orchestrator | | config_drive | | 2025-06-19 11:04:32.556537 | orchestrator | | created | 2025-06-19T11:02:17Z | 2025-06-19 11:04:32.556548 | orchestrator | | description | None | 2025-06-19 11:04:32.556648 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-06-19 11:04:32.556660 | orchestrator | | hostId | ae1922e7b5e2888078aed40eee44d9fc64261abc05e48de05a28e6fc | 2025-06-19 11:04:32.556677 | orchestrator | | host_status | None | 2025-06-19 11:04:32.556688 | orchestrator | | id | 3617e52f-7bde-445d-932d-7a2c661ba8da | 2025-06-19 11:04:32.556720 | orchestrator | | image | Cirros 0.6.2 (1536ef1a-b94d-4b4a-aaaf-036333c512dd) | 2025-06-19 11:04:32.556731 | orchestrator | | key_name | test | 2025-06-19 11:04:32.556742 | orchestrator | | locked | False | 2025-06-19 11:04:32.556752 | orchestrator | | locked_reason | None | 2025-06-19 11:04:32.556763 | orchestrator | | name | test-4 | 2025-06-19 11:04:32.556782 | orchestrator | | pinned_availability_zone | None | 2025-06-19 11:04:32.556794 | orchestrator | | progress | 0 | 2025-06-19 11:04:32.556805 | orchestrator | | project_id | e8893c1c4aed45f69d43933a71f33e8a | 2025-06-19 11:04:32.556816 | orchestrator | | properties | hostname='test-4' | 2025-06-19 
11:04:32.556827 | orchestrator | | security_groups | name='icmp' | 2025-06-19 11:04:32.556844 | orchestrator | | | name='ssh' | 2025-06-19 11:04:32.556862 | orchestrator | | server_groups | None | 2025-06-19 11:04:32.556873 | orchestrator | | status | ACTIVE | 2025-06-19 11:04:32.556884 | orchestrator | | tags | test | 2025-06-19 11:04:32.556895 | orchestrator | | trusted_image_certificates | None | 2025-06-19 11:04:32.556906 | orchestrator | | updated | 2025-06-19T11:03:13Z | 2025-06-19 11:04:32.556923 | orchestrator | | user_id | 61e7a0efc89042dc8a9816c7cbecc487 | 2025-06-19 11:04:32.556934 | orchestrator | | volumes_attached | | 2025-06-19 11:04:32.562928 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-19 11:04:32.839783 | orchestrator | + server_ping 2025-06-19 11:04:32.840937 | orchestrator | ++ tr -d '\r' 2025-06-19 11:04:32.841733 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-06-19 11:04:35.625092 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-19 11:04:35.625221 | orchestrator | + ping -c3 192.168.112.122 2025-06-19 11:04:35.642629 | orchestrator | PING 192.168.112.122 (192.168.112.122) 56(84) bytes of data. 
2025-06-19 11:04:35.642721 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=1 ttl=63 time=11.5 ms 2025-06-19 11:04:36.635872 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=2 ttl=63 time=2.70 ms 2025-06-19 11:04:37.636632 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=3 ttl=63 time=2.16 ms 2025-06-19 11:04:37.636754 | orchestrator | 2025-06-19 11:04:37.636761 | orchestrator | --- 192.168.112.122 ping statistics --- 2025-06-19 11:04:37.636767 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-19 11:04:37.636771 | orchestrator | rtt min/avg/max/mdev = 2.161/5.445/11.480/4.272 ms 2025-06-19 11:04:37.637273 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-19 11:04:37.637282 | orchestrator | + ping -c3 192.168.112.138 2025-06-19 11:04:37.650886 | orchestrator | PING 192.168.112.138 (192.168.112.138) 56(84) bytes of data. 2025-06-19 11:04:37.650919 | orchestrator | 64 bytes from 192.168.112.138: icmp_seq=1 ttl=63 time=8.64 ms 2025-06-19 11:04:38.647727 | orchestrator | 64 bytes from 192.168.112.138: icmp_seq=2 ttl=63 time=2.87 ms 2025-06-19 11:04:39.648773 | orchestrator | 64 bytes from 192.168.112.138: icmp_seq=3 ttl=63 time=2.42 ms 2025-06-19 11:04:39.648897 | orchestrator | 2025-06-19 11:04:39.648913 | orchestrator | --- 192.168.112.138 ping statistics --- 2025-06-19 11:04:39.649068 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-19 11:04:39.649084 | orchestrator | rtt min/avg/max/mdev = 2.420/4.646/8.644/2.833 ms 2025-06-19 11:04:39.649109 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-19 11:04:39.649122 | orchestrator | + ping -c3 192.168.112.184 2025-06-19 11:04:39.662223 | orchestrator | PING 192.168.112.184 (192.168.112.184) 56(84) bytes of data. 
2025-06-19 11:04:39.662322 | orchestrator | 64 bytes from 192.168.112.184: icmp_seq=1 ttl=63 time=8.39 ms 2025-06-19 11:04:40.658950 | orchestrator | 64 bytes from 192.168.112.184: icmp_seq=2 ttl=63 time=3.22 ms 2025-06-19 11:04:41.659872 | orchestrator | 64 bytes from 192.168.112.184: icmp_seq=3 ttl=63 time=2.56 ms 2025-06-19 11:04:41.659988 | orchestrator | 2025-06-19 11:04:41.660004 | orchestrator | --- 192.168.112.184 ping statistics --- 2025-06-19 11:04:41.660017 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-19 11:04:41.660028 | orchestrator | rtt min/avg/max/mdev = 2.556/4.724/8.394/2.608 ms 2025-06-19 11:04:41.660297 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-19 11:04:41.660318 | orchestrator | + ping -c3 192.168.112.175 2025-06-19 11:04:41.673146 | orchestrator | PING 192.168.112.175 (192.168.112.175) 56(84) bytes of data. 2025-06-19 11:04:41.673169 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=1 ttl=63 time=8.33 ms 2025-06-19 11:04:42.669398 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=2 ttl=63 time=2.35 ms 2025-06-19 11:04:43.671386 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=3 ttl=63 time=2.58 ms 2025-06-19 11:04:43.671708 | orchestrator | 2025-06-19 11:04:43.671732 | orchestrator | --- 192.168.112.175 ping statistics --- 2025-06-19 11:04:43.671745 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-19 11:04:43.671756 | orchestrator | rtt min/avg/max/mdev = 2.352/4.420/8.328/2.764 ms 2025-06-19 11:04:43.671779 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-19 11:04:43.671791 | orchestrator | + ping -c3 192.168.112.112 2025-06-19 11:04:43.685780 | orchestrator | PING 192.168.112.112 (192.168.112.112) 56(84) bytes of data. 
2025-06-19 11:04:43.685834 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=1 ttl=63 time=7.36 ms 2025-06-19 11:04:44.686642 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=2 ttl=63 time=2.60 ms 2025-06-19 11:04:45.683308 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=3 ttl=63 time=2.42 ms 2025-06-19 11:04:45.683408 | orchestrator | 2025-06-19 11:04:45.683423 | orchestrator | --- 192.168.112.112 ping statistics --- 2025-06-19 11:04:45.683436 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-06-19 11:04:45.683448 | orchestrator | rtt min/avg/max/mdev = 2.423/4.129/7.364/2.288 ms 2025-06-19 11:04:45.684195 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-06-19 11:04:45.684218 | orchestrator | + compute_list 2025-06-19 11:04:45.684230 | orchestrator | + osism manage compute list testbed-node-3 2025-06-19 11:04:48.911157 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-19 11:04:48.911257 | orchestrator | | ID | Name | Status | 2025-06-19 11:04:48.911269 | orchestrator | |--------------------------------------+--------+----------| 2025-06-19 11:04:48.911310 | orchestrator | | ecb71e93-a792-43be-a498-257ed194d9a5 | test-2 | ACTIVE | 2025-06-19 11:04:48.911322 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-19 11:04:49.179664 | orchestrator | + osism manage compute list testbed-node-4 2025-06-19 11:04:52.198599 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-19 11:04:52.198756 | orchestrator | | ID | Name | Status | 2025-06-19 11:04:52.198772 | orchestrator | |--------------------------------------+--------+----------| 2025-06-19 11:04:52.198784 | orchestrator | | 3617e52f-7bde-445d-932d-7a2c661ba8da | test-4 | ACTIVE | 2025-06-19 11:04:52.198795 | orchestrator | | d3247e59-efb8-449f-8e62-aa80c907c108 | test-1 | ACTIVE | 2025-06-19 11:04:52.198806 | orchestrator | 
+--------------------------------------+--------+----------+ 2025-06-19 11:04:52.475098 | orchestrator | + osism manage compute list testbed-node-5 2025-06-19 11:04:55.478605 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-19 11:04:55.478693 | orchestrator | | ID | Name | Status | 2025-06-19 11:04:55.478704 | orchestrator | |--------------------------------------+--------+----------| 2025-06-19 11:04:55.478712 | orchestrator | | 3b092941-fd16-4d00-aa15-e1e4c4093c92 | test-3 | ACTIVE | 2025-06-19 11:04:55.478718 | orchestrator | | 5c3f165d-a039-4576-9a60-63ab66a22679 | test | ACTIVE | 2025-06-19 11:04:55.478725 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-19 11:04:55.766499 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-4 2025-06-19 11:04:59.151739 | orchestrator | 2025-06-19 11:04:59 | INFO  | Live migrating server 3617e52f-7bde-445d-932d-7a2c661ba8da 2025-06-19 11:05:13.190185 | orchestrator | 2025-06-19 11:05:13 | INFO  | Live migration of 3617e52f-7bde-445d-932d-7a2c661ba8da (test-4) is still in progress 2025-06-19 11:05:15.575906 | orchestrator | 2025-06-19 11:05:15 | INFO  | Live migration of 3617e52f-7bde-445d-932d-7a2c661ba8da (test-4) is still in progress 2025-06-19 11:05:18.123019 | orchestrator | 2025-06-19 11:05:18 | INFO  | Live migration of 3617e52f-7bde-445d-932d-7a2c661ba8da (test-4) is still in progress 2025-06-19 11:05:20.397270 | orchestrator | 2025-06-19 11:05:20 | INFO  | Live migration of 3617e52f-7bde-445d-932d-7a2c661ba8da (test-4) is still in progress 2025-06-19 11:05:22.904405 | orchestrator | 2025-06-19 11:05:22 | INFO  | Live migration of 3617e52f-7bde-445d-932d-7a2c661ba8da (test-4) is still in progress 2025-06-19 11:05:25.135772 | orchestrator | 2025-06-19 11:05:25 | INFO  | Live migration of 3617e52f-7bde-445d-932d-7a2c661ba8da (test-4) is still in progress 2025-06-19 11:05:27.531409 | orchestrator | 2025-06-19 
11:05:27 | INFO  | Live migration of 3617e52f-7bde-445d-932d-7a2c661ba8da (test-4) is still in progress 2025-06-19 11:05:29.905463 | orchestrator | 2025-06-19 11:05:29 | INFO  | Live migration of 3617e52f-7bde-445d-932d-7a2c661ba8da (test-4) completed with status ACTIVE 2025-06-19 11:05:29.905546 | orchestrator | 2025-06-19 11:05:29 | INFO  | Live migrating server d3247e59-efb8-449f-8e62-aa80c907c108 2025-06-19 11:05:43.240091 | orchestrator | 2025-06-19 11:05:43 | INFO  | Live migration of d3247e59-efb8-449f-8e62-aa80c907c108 (test-1) is still in progress 2025-06-19 11:05:45.565182 | orchestrator | 2025-06-19 11:05:45 | INFO  | Live migration of d3247e59-efb8-449f-8e62-aa80c907c108 (test-1) is still in progress 2025-06-19 11:05:48.068627 | orchestrator | 2025-06-19 11:05:48 | INFO  | Live migration of d3247e59-efb8-449f-8e62-aa80c907c108 (test-1) is still in progress 2025-06-19 11:05:50.407202 | orchestrator | 2025-06-19 11:05:50 | INFO  | Live migration of d3247e59-efb8-449f-8e62-aa80c907c108 (test-1) is still in progress 2025-06-19 11:05:52.835148 | orchestrator | 2025-06-19 11:05:52 | INFO  | Live migration of d3247e59-efb8-449f-8e62-aa80c907c108 (test-1) is still in progress 2025-06-19 11:05:55.159975 | orchestrator | 2025-06-19 11:05:55 | INFO  | Live migration of d3247e59-efb8-449f-8e62-aa80c907c108 (test-1) is still in progress 2025-06-19 11:05:57.504954 | orchestrator | 2025-06-19 11:05:57 | INFO  | Live migration of d3247e59-efb8-449f-8e62-aa80c907c108 (test-1) completed with status ACTIVE 2025-06-19 11:05:57.751119 | orchestrator | + compute_list 2025-06-19 11:05:57.751213 | orchestrator | + osism manage compute list testbed-node-3 2025-06-19 11:06:00.687132 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-19 11:06:00.687242 | orchestrator | | ID | Name | Status | 2025-06-19 11:06:00.687257 | orchestrator | |--------------------------------------+--------+----------| 2025-06-19 11:06:00.687268 | orchestrator | | 
3617e52f-7bde-445d-932d-7a2c661ba8da | test-4 | ACTIVE | 2025-06-19 11:06:00.687280 | orchestrator | | ecb71e93-a792-43be-a498-257ed194d9a5 | test-2 | ACTIVE | 2025-06-19 11:06:00.687291 | orchestrator | | d3247e59-efb8-449f-8e62-aa80c907c108 | test-1 | ACTIVE | 2025-06-19 11:06:00.687303 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-19 11:06:00.940672 | orchestrator | + osism manage compute list testbed-node-4 2025-06-19 11:06:03.383775 | orchestrator | +------+--------+----------+ 2025-06-19 11:06:03.383885 | orchestrator | | ID | Name | Status | 2025-06-19 11:06:03.383901 | orchestrator | |------+--------+----------| 2025-06-19 11:06:03.383913 | orchestrator | +------+--------+----------+ 2025-06-19 11:06:03.675662 | orchestrator | + osism manage compute list testbed-node-5 2025-06-19 11:06:06.655191 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-19 11:06:06.655992 | orchestrator | | ID | Name | Status | 2025-06-19 11:06:06.656024 | orchestrator | |--------------------------------------+--------+----------| 2025-06-19 11:06:06.656038 | orchestrator | | 3b092941-fd16-4d00-aa15-e1e4c4093c92 | test-3 | ACTIVE | 2025-06-19 11:06:06.656052 | orchestrator | | 5c3f165d-a039-4576-9a60-63ab66a22679 | test | ACTIVE | 2025-06-19 11:06:06.656065 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-19 11:06:06.900055 | orchestrator | + server_ping 2025-06-19 11:06:06.900843 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-06-19 11:06:06.901610 | orchestrator | ++ tr -d '\r' 2025-06-19 11:06:09.828584 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-19 11:06:09.828683 | orchestrator | + ping -c3 192.168.112.122 2025-06-19 11:06:09.843579 | orchestrator | PING 192.168.112.122 (192.168.112.122) 
56(84) bytes of data. 2025-06-19 11:06:09.843663 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=1 ttl=63 time=11.9 ms 2025-06-19 11:06:10.835737 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=2 ttl=63 time=2.73 ms 2025-06-19 11:06:11.837664 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=3 ttl=63 time=1.97 ms 2025-06-19 11:06:11.838453 | orchestrator | 2025-06-19 11:06:11.838489 | orchestrator | --- 192.168.112.122 ping statistics --- 2025-06-19 11:06:11.838532 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-19 11:06:11.838545 | orchestrator | rtt min/avg/max/mdev = 1.965/5.522/11.872/4.500 ms 2025-06-19 11:06:11.838574 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-19 11:06:11.838586 | orchestrator | + ping -c3 192.168.112.138 2025-06-19 11:06:11.851451 | orchestrator | PING 192.168.112.138 (192.168.112.138) 56(84) bytes of data. 
2025-06-19 11:06:11.851487 | orchestrator | 64 bytes from 192.168.112.138: icmp_seq=1 ttl=63 time=9.31 ms 2025-06-19 11:06:12.846850 | orchestrator | 64 bytes from 192.168.112.138: icmp_seq=2 ttl=63 time=2.49 ms 2025-06-19 11:06:13.849715 | orchestrator | 64 bytes from 192.168.112.138: icmp_seq=3 ttl=63 time=2.17 ms 2025-06-19 11:06:13.849818 | orchestrator | 2025-06-19 11:06:13.849833 | orchestrator | --- 192.168.112.138 ping statistics --- 2025-06-19 11:06:13.849846 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-19 11:06:13.849858 | orchestrator | rtt min/avg/max/mdev = 2.167/4.656/9.308/3.292 ms 2025-06-19 11:06:13.849871 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-19 11:06:13.849919 | orchestrator | + ping -c3 192.168.112.184 2025-06-19 11:06:13.861405 | orchestrator | PING 192.168.112.184 (192.168.112.184) 56(84) bytes of data. 2025-06-19 11:06:13.861457 | orchestrator | 64 bytes from 192.168.112.184: icmp_seq=1 ttl=63 time=8.48 ms 2025-06-19 11:06:14.857433 | orchestrator | 64 bytes from 192.168.112.184: icmp_seq=2 ttl=63 time=2.61 ms 2025-06-19 11:06:15.858486 | orchestrator | 64 bytes from 192.168.112.184: icmp_seq=3 ttl=63 time=2.19 ms 2025-06-19 11:06:15.858647 | orchestrator | 2025-06-19 11:06:15.858663 | orchestrator | --- 192.168.112.184 ping statistics --- 2025-06-19 11:06:15.858675 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-19 11:06:15.858686 | orchestrator | rtt min/avg/max/mdev = 2.190/4.427/8.484/2.873 ms 2025-06-19 11:06:15.858698 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-19 11:06:15.858709 | orchestrator | + ping -c3 192.168.112.175 2025-06-19 11:06:15.870201 | orchestrator | PING 192.168.112.175 (192.168.112.175) 56(84) bytes of data. 
2025-06-19 11:06:15.870238 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=1 ttl=63 time=8.32 ms 2025-06-19 11:06:16.865426 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=2 ttl=63 time=2.49 ms 2025-06-19 11:06:17.866169 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=3 ttl=63 time=1.56 ms 2025-06-19 11:06:17.866268 | orchestrator | 2025-06-19 11:06:17.866284 | orchestrator | --- 192.168.112.175 ping statistics --- 2025-06-19 11:06:17.866298 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-19 11:06:17.866309 | orchestrator | rtt min/avg/max/mdev = 1.559/4.124/8.320/2.991 ms 2025-06-19 11:06:17.866321 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-19 11:06:17.866332 | orchestrator | + ping -c3 192.168.112.112 2025-06-19 11:06:17.879638 | orchestrator | PING 192.168.112.112 (192.168.112.112) 56(84) bytes of data. 2025-06-19 11:06:17.879668 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=1 ttl=63 time=9.20 ms 2025-06-19 11:06:18.874772 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=2 ttl=63 time=2.41 ms 2025-06-19 11:06:19.876385 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=3 ttl=63 time=1.86 ms 2025-06-19 11:06:19.876487 | orchestrator | 2025-06-19 11:06:19.876503 | orchestrator | --- 192.168.112.112 ping statistics --- 2025-06-19 11:06:19.876516 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-19 11:06:19.876641 | orchestrator | rtt min/avg/max/mdev = 1.859/4.487/9.198/3.338 ms 2025-06-19 11:06:19.876654 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-5 2025-06-19 11:06:22.750602 | orchestrator | 2025-06-19 11:06:22 | INFO  | Live migrating server 3b092941-fd16-4d00-aa15-e1e4c4093c92 2025-06-19 11:06:33.936946 | orchestrator | 2025-06-19 11:06:33 | INFO  | Live migration of 
3b092941-fd16-4d00-aa15-e1e4c4093c92 (test-3) is still in progress 2025-06-19 11:06:36.263347 | orchestrator | 2025-06-19 11:06:36 | INFO  | Live migration of 3b092941-fd16-4d00-aa15-e1e4c4093c92 (test-3) is still in progress 2025-06-19 11:06:38.584931 | orchestrator | 2025-06-19 11:06:38 | INFO  | Live migration of 3b092941-fd16-4d00-aa15-e1e4c4093c92 (test-3) is still in progress 2025-06-19 11:06:40.853136 | orchestrator | 2025-06-19 11:06:40 | INFO  | Live migration of 3b092941-fd16-4d00-aa15-e1e4c4093c92 (test-3) is still in progress 2025-06-19 11:06:43.097203 | orchestrator | 2025-06-19 11:06:43 | INFO  | Live migration of 3b092941-fd16-4d00-aa15-e1e4c4093c92 (test-3) is still in progress 2025-06-19 11:06:45.396301 | orchestrator | 2025-06-19 11:06:45 | INFO  | Live migration of 3b092941-fd16-4d00-aa15-e1e4c4093c92 (test-3) is still in progress 2025-06-19 11:06:47.676462 | orchestrator | 2025-06-19 11:06:47 | INFO  | Live migration of 3b092941-fd16-4d00-aa15-e1e4c4093c92 (test-3) is still in progress 2025-06-19 11:06:50.009596 | orchestrator | 2025-06-19 11:06:50 | INFO  | Live migration of 3b092941-fd16-4d00-aa15-e1e4c4093c92 (test-3) completed with status ACTIVE 2025-06-19 11:06:50.009768 | orchestrator | 2025-06-19 11:06:50 | INFO  | Live migrating server 5c3f165d-a039-4576-9a60-63ab66a22679 2025-06-19 11:07:00.776659 | orchestrator | 2025-06-19 11:07:00 | INFO  | Live migration of 5c3f165d-a039-4576-9a60-63ab66a22679 (test) is still in progress 2025-06-19 11:07:03.093735 | orchestrator | 2025-06-19 11:07:03 | INFO  | Live migration of 5c3f165d-a039-4576-9a60-63ab66a22679 (test) is still in progress 2025-06-19 11:07:05.446190 | orchestrator | 2025-06-19 11:07:05 | INFO  | Live migration of 5c3f165d-a039-4576-9a60-63ab66a22679 (test) is still in progress 2025-06-19 11:07:07.788468 | orchestrator | 2025-06-19 11:07:07 | INFO  | Live migration of 5c3f165d-a039-4576-9a60-63ab66a22679 (test) is still in progress 2025-06-19 11:07:10.071122 | orchestrator | 
2025-06-19 11:07:10 | INFO  | Live migration of 5c3f165d-a039-4576-9a60-63ab66a22679 (test) is still in progress
2025-06-19 11:07:12.323014 | orchestrator | 2025-06-19 11:07:12 | INFO  | Live migration of 5c3f165d-a039-4576-9a60-63ab66a22679 (test) is still in progress
2025-06-19 11:07:14.576186 | orchestrator | 2025-06-19 11:07:14 | INFO  | Live migration of 5c3f165d-a039-4576-9a60-63ab66a22679 (test) is still in progress
2025-06-19 11:07:16.868150 | orchestrator | 2025-06-19 11:07:16 | INFO  | Live migration of 5c3f165d-a039-4576-9a60-63ab66a22679 (test) is still in progress
2025-06-19 11:07:19.161959 | orchestrator | 2025-06-19 11:07:19 | INFO  | Live migration of 5c3f165d-a039-4576-9a60-63ab66a22679 (test) is still in progress
2025-06-19 11:07:21.484602 | orchestrator | 2025-06-19 11:07:21 | INFO  | Live migration of 5c3f165d-a039-4576-9a60-63ab66a22679 (test) is still in progress
2025-06-19 11:07:23.799614 | orchestrator | 2025-06-19 11:07:23 | INFO  | Live migration of 5c3f165d-a039-4576-9a60-63ab66a22679 (test) completed with status ACTIVE
2025-06-19 11:07:24.039014 | orchestrator | + compute_list
2025-06-19 11:07:24.039094 | orchestrator | + osism manage compute list testbed-node-3
2025-06-19 11:07:27.152868 | orchestrator | +--------------------------------------+--------+----------+
2025-06-19 11:07:27.152987 | orchestrator | | ID                                   | Name   | Status   |
2025-06-19 11:07:27.153002 | orchestrator | |--------------------------------------+--------+----------|
2025-06-19 11:07:27.153014 | orchestrator | | 3617e52f-7bde-445d-932d-7a2c661ba8da | test-4 | ACTIVE   |
2025-06-19 11:07:27.153025 | orchestrator | | 3b092941-fd16-4d00-aa15-e1e4c4093c92 | test-3 | ACTIVE   |
2025-06-19 11:07:27.153036 | orchestrator | | ecb71e93-a792-43be-a498-257ed194d9a5 | test-2 | ACTIVE   |
2025-06-19 11:07:27.153047 | orchestrator | | d3247e59-efb8-449f-8e62-aa80c907c108 | test-1 | ACTIVE   |
2025-06-19 11:07:27.153058 | orchestrator | | 5c3f165d-a039-4576-9a60-63ab66a22679 | test   | ACTIVE   |
2025-06-19 11:07:27.153081 | orchestrator | +--------------------------------------+--------+----------+
2025-06-19 11:07:27.403087 | orchestrator | + osism manage compute list testbed-node-4
2025-06-19 11:07:29.929199 | orchestrator | +------+--------+----------+
2025-06-19 11:07:29.929286 | orchestrator | | ID   | Name   | Status   |
2025-06-19 11:07:29.929296 | orchestrator | |------+--------+----------|
2025-06-19 11:07:29.929304 | orchestrator | +------+--------+----------+
2025-06-19 11:07:30.098113 | orchestrator | + osism manage compute list testbed-node-5
2025-06-19 11:07:32.566573 | orchestrator | +------+--------+----------+
2025-06-19 11:07:32.567436 | orchestrator | | ID   | Name   | Status   |
2025-06-19 11:07:32.567470 | orchestrator | |------+--------+----------|
2025-06-19 11:07:32.567482 | orchestrator | +------+--------+----------+
2025-06-19 11:07:32.748321 | orchestrator | + server_ping
2025-06-19 11:07:32.749265 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-06-19 11:07:32.749536 | orchestrator | ++ tr -d '\r'
2025-06-19 11:07:35.560306 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-19 11:07:35.560412 | orchestrator | + ping -c3 192.168.112.122
2025-06-19 11:07:35.572564 | orchestrator | PING 192.168.112.122 (192.168.112.122) 56(84) bytes of data.
2025-06-19 11:07:35.572601 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=1 ttl=63 time=8.66 ms
2025-06-19 11:07:36.568635 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=2 ttl=63 time=2.36 ms
2025-06-19 11:07:37.570247 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=3 ttl=63 time=2.01 ms
2025-06-19 11:07:37.571200 | orchestrator |
2025-06-19 11:07:37.571273 | orchestrator | --- 192.168.112.122 ping statistics ---
2025-06-19 11:07:37.571289 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-06-19 11:07:37.571301 | orchestrator | rtt min/avg/max/mdev = 2.006/4.340/8.655/3.054 ms
2025-06-19 11:07:37.571330 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-19 11:07:37.571341 | orchestrator | + ping -c3 192.168.112.138
2025-06-19 11:07:37.586180 | orchestrator | PING 192.168.112.138 (192.168.112.138) 56(84) bytes of data.
2025-06-19 11:07:37.586227 | orchestrator | 64 bytes from 192.168.112.138: icmp_seq=1 ttl=63 time=9.06 ms
2025-06-19 11:07:38.580553 | orchestrator | 64 bytes from 192.168.112.138: icmp_seq=2 ttl=63 time=2.76 ms
2025-06-19 11:07:39.581769 | orchestrator | 64 bytes from 192.168.112.138: icmp_seq=3 ttl=63 time=1.99 ms
2025-06-19 11:07:39.581877 | orchestrator |
2025-06-19 11:07:39.581892 | orchestrator | --- 192.168.112.138 ping statistics ---
2025-06-19 11:07:39.581904 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-06-19 11:07:39.581916 | orchestrator | rtt min/avg/max/mdev = 1.991/4.605/9.064/3.168 ms
2025-06-19 11:07:39.582237 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-19 11:07:39.582261 | orchestrator | + ping -c3 192.168.112.184
2025-06-19 11:07:39.595529 | orchestrator | PING 192.168.112.184 (192.168.112.184) 56(84) bytes of data.
2025-06-19 11:07:39.595563 | orchestrator | 64 bytes from 192.168.112.184: icmp_seq=1 ttl=63 time=9.24 ms
2025-06-19 11:07:40.590554 | orchestrator | 64 bytes from 192.168.112.184: icmp_seq=2 ttl=63 time=2.23 ms
2025-06-19 11:07:41.591977 | orchestrator | 64 bytes from 192.168.112.184: icmp_seq=3 ttl=63 time=2.12 ms
2025-06-19 11:07:41.592088 | orchestrator |
2025-06-19 11:07:41.592104 | orchestrator | --- 192.168.112.184 ping statistics ---
2025-06-19 11:07:41.592117 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-06-19 11:07:41.592128 | orchestrator | rtt min/avg/max/mdev = 2.119/4.531/9.241/3.330 ms
2025-06-19 11:07:41.592323 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-19 11:07:41.592344 | orchestrator | + ping -c3 192.168.112.175
2025-06-19 11:07:41.604539 | orchestrator | PING 192.168.112.175 (192.168.112.175) 56(84) bytes of data.
2025-06-19 11:07:41.604621 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=1 ttl=63 time=6.05 ms
2025-06-19 11:07:42.602649 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=2 ttl=63 time=2.38 ms
2025-06-19 11:07:43.602562 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=3 ttl=63 time=1.90 ms
2025-06-19 11:07:43.602651 | orchestrator |
2025-06-19 11:07:43.602661 | orchestrator | --- 192.168.112.175 ping statistics ---
2025-06-19 11:07:43.602669 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-06-19 11:07:43.602677 | orchestrator | rtt min/avg/max/mdev = 1.896/3.441/6.046/1.852 ms
2025-06-19 11:07:43.602904 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-19 11:07:43.602915 | orchestrator | + ping -c3 192.168.112.112
2025-06-19 11:07:43.615588 | orchestrator | PING 192.168.112.112 (192.168.112.112) 56(84) bytes of data.
2025-06-19 11:07:43.615689 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=1 ttl=63 time=8.07 ms
2025-06-19 11:07:44.611966 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=2 ttl=63 time=3.26 ms
2025-06-19 11:07:45.612866 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=3 ttl=63 time=2.19 ms
2025-06-19 11:07:45.612969 | orchestrator |
2025-06-19 11:07:45.612984 | orchestrator | --- 192.168.112.112 ping statistics ---
2025-06-19 11:07:45.612997 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-06-19 11:07:45.613008 | orchestrator | rtt min/avg/max/mdev = 2.190/4.505/8.070/2.558 ms
2025-06-19 11:07:45.613019 | orchestrator | + osism manage compute migrate --yes --target testbed-node-4 testbed-node-3
2025-06-19 11:07:48.842131 | orchestrator | 2025-06-19 11:07:48 | INFO  | Live migrating server 3617e52f-7bde-445d-932d-7a2c661ba8da
2025-06-19 11:08:00.508596 | orchestrator | 2025-06-19 11:08:00 | INFO  | Live migration of 3617e52f-7bde-445d-932d-7a2c661ba8da (test-4) is still in progress
2025-06-19 11:08:02.837975 | orchestrator | 2025-06-19 11:08:02 | INFO  | Live migration of 3617e52f-7bde-445d-932d-7a2c661ba8da (test-4) is still in progress
2025-06-19 11:08:05.120380 | orchestrator | 2025-06-19 11:08:05 | INFO  | Live migration of 3617e52f-7bde-445d-932d-7a2c661ba8da (test-4) is still in progress
2025-06-19 11:08:07.540003 | orchestrator | 2025-06-19 11:08:07 | INFO  | Live migration of 3617e52f-7bde-445d-932d-7a2c661ba8da (test-4) is still in progress
2025-06-19 11:08:09.866605 | orchestrator | 2025-06-19 11:08:09 | INFO  | Live migration of 3617e52f-7bde-445d-932d-7a2c661ba8da (test-4) is still in progress
2025-06-19 11:08:12.268200 | orchestrator | 2025-06-19 11:08:12 | INFO  | Live migration of 3617e52f-7bde-445d-932d-7a2c661ba8da (test-4) is still in progress
2025-06-19 11:08:14.526335 | orchestrator | 2025-06-19 11:08:14 | INFO  | Live migration of 3617e52f-7bde-445d-932d-7a2c661ba8da (test-4) completed with status ACTIVE
2025-06-19 11:08:14.526454 | orchestrator | 2025-06-19 11:08:14 | INFO  | Live migrating server 3b092941-fd16-4d00-aa15-e1e4c4093c92
2025-06-19 11:08:26.684876 | orchestrator | 2025-06-19 11:08:26 | INFO  | Live migration of 3b092941-fd16-4d00-aa15-e1e4c4093c92 (test-3) is still in progress
2025-06-19 11:08:29.013384 | orchestrator | 2025-06-19 11:08:29 | INFO  | Live migration of 3b092941-fd16-4d00-aa15-e1e4c4093c92 (test-3) is still in progress
2025-06-19 11:08:31.350862 | orchestrator | 2025-06-19 11:08:31 | INFO  | Live migration of 3b092941-fd16-4d00-aa15-e1e4c4093c92 (test-3) is still in progress
2025-06-19 11:08:33.686955 | orchestrator | 2025-06-19 11:08:33 | INFO  | Live migration of 3b092941-fd16-4d00-aa15-e1e4c4093c92 (test-3) is still in progress
2025-06-19 11:08:36.031211 | orchestrator | 2025-06-19 11:08:36 | INFO  | Live migration of 3b092941-fd16-4d00-aa15-e1e4c4093c92 (test-3) is still in progress
2025-06-19 11:08:38.351092 | orchestrator | 2025-06-19 11:08:38 | INFO  | Live migration of 3b092941-fd16-4d00-aa15-e1e4c4093c92 (test-3) is still in progress
2025-06-19 11:08:40.612162 | orchestrator | 2025-06-19 11:08:40 | INFO  | Live migration of 3b092941-fd16-4d00-aa15-e1e4c4093c92 (test-3) completed with status ACTIVE
2025-06-19 11:08:40.612270 | orchestrator | 2025-06-19 11:08:40 | INFO  | Live migrating server ecb71e93-a792-43be-a498-257ed194d9a5
2025-06-19 11:08:51.005370 | orchestrator | 2025-06-19 11:08:51 | INFO  | Live migration of ecb71e93-a792-43be-a498-257ed194d9a5 (test-2) is still in progress
2025-06-19 11:08:53.331533 | orchestrator | 2025-06-19 11:08:53 | INFO  | Live migration of ecb71e93-a792-43be-a498-257ed194d9a5 (test-2) is still in progress
2025-06-19 11:08:55.695398 | orchestrator | 2025-06-19 11:08:55 | INFO  | Live migration of ecb71e93-a792-43be-a498-257ed194d9a5 (test-2) is still in progress
2025-06-19 11:08:58.067985 | orchestrator | 2025-06-19 11:08:58 | INFO  | Live migration of ecb71e93-a792-43be-a498-257ed194d9a5 (test-2) is still in progress
2025-06-19 11:09:00.530392 | orchestrator | 2025-06-19 11:09:00 | INFO  | Live migration of ecb71e93-a792-43be-a498-257ed194d9a5 (test-2) is still in progress
2025-06-19 11:09:02.853125 | orchestrator | 2025-06-19 11:09:02 | INFO  | Live migration of ecb71e93-a792-43be-a498-257ed194d9a5 (test-2) is still in progress
2025-06-19 11:09:05.202614 | orchestrator | 2025-06-19 11:09:05 | INFO  | Live migration of ecb71e93-a792-43be-a498-257ed194d9a5 (test-2) completed with status ACTIVE
2025-06-19 11:09:05.202715 | orchestrator | 2025-06-19 11:09:05 | INFO  | Live migrating server d3247e59-efb8-449f-8e62-aa80c907c108
2025-06-19 11:09:15.457642 | orchestrator | 2025-06-19 11:09:15 | INFO  | Live migration of d3247e59-efb8-449f-8e62-aa80c907c108 (test-1) is still in progress
2025-06-19 11:09:17.818568 | orchestrator | 2025-06-19 11:09:17 | INFO  | Live migration of d3247e59-efb8-449f-8e62-aa80c907c108 (test-1) is still in progress
2025-06-19 11:09:20.169919 | orchestrator | 2025-06-19 11:09:20 | INFO  | Live migration of d3247e59-efb8-449f-8e62-aa80c907c108 (test-1) is still in progress
2025-06-19 11:09:22.528953 | orchestrator | 2025-06-19 11:09:22 | INFO  | Live migration of d3247e59-efb8-449f-8e62-aa80c907c108 (test-1) is still in progress
2025-06-19 11:09:24.815920 | orchestrator | 2025-06-19 11:09:24 | INFO  | Live migration of d3247e59-efb8-449f-8e62-aa80c907c108 (test-1) is still in progress
2025-06-19 11:09:27.137288 | orchestrator | 2025-06-19 11:09:27 | INFO  | Live migration of d3247e59-efb8-449f-8e62-aa80c907c108 (test-1) is still in progress
2025-06-19 11:09:29.380862 | orchestrator | 2025-06-19 11:09:29 | INFO  | Live migration of d3247e59-efb8-449f-8e62-aa80c907c108 (test-1) is still in progress
2025-06-19 11:09:31.756168 | orchestrator | 2025-06-19 11:09:31 | INFO  | Live migration of d3247e59-efb8-449f-8e62-aa80c907c108 (test-1) completed with status ACTIVE
2025-06-19 11:09:31.756272 | orchestrator | 2025-06-19 11:09:31 | INFO  | Live migrating server 5c3f165d-a039-4576-9a60-63ab66a22679
2025-06-19 11:09:42.906051 | orchestrator | 2025-06-19 11:09:42 | INFO  | Live migration of 5c3f165d-a039-4576-9a60-63ab66a22679 (test) is still in progress
2025-06-19 11:09:45.273146 | orchestrator | 2025-06-19 11:09:45 | INFO  | Live migration of 5c3f165d-a039-4576-9a60-63ab66a22679 (test) is still in progress
2025-06-19 11:09:47.631034 | orchestrator | 2025-06-19 11:09:47 | INFO  | Live migration of 5c3f165d-a039-4576-9a60-63ab66a22679 (test) is still in progress
2025-06-19 11:09:49.883109 | orchestrator | 2025-06-19 11:09:49 | INFO  | Live migration of 5c3f165d-a039-4576-9a60-63ab66a22679 (test) is still in progress
2025-06-19 11:09:52.255556 | orchestrator | 2025-06-19 11:09:52 | INFO  | Live migration of 5c3f165d-a039-4576-9a60-63ab66a22679 (test) is still in progress
2025-06-19 11:09:54.621619 | orchestrator | 2025-06-19 11:09:54 | INFO  | Live migration of 5c3f165d-a039-4576-9a60-63ab66a22679 (test) is still in progress
2025-06-19 11:09:57.102975 | orchestrator | 2025-06-19 11:09:57 | INFO  | Live migration of 5c3f165d-a039-4576-9a60-63ab66a22679 (test) is still in progress
2025-06-19 11:09:59.460725 | orchestrator | 2025-06-19 11:09:59 | INFO  | Live migration of 5c3f165d-a039-4576-9a60-63ab66a22679 (test) is still in progress
2025-06-19 11:10:01.901115 | orchestrator | 2025-06-19 11:10:01 | INFO  | Live migration of 5c3f165d-a039-4576-9a60-63ab66a22679 (test) is still in progress
2025-06-19 11:10:04.227474 | orchestrator | 2025-06-19 11:10:04 | INFO  | Live migration of 5c3f165d-a039-4576-9a60-63ab66a22679 (test) completed with status ACTIVE
2025-06-19 11:10:04.515463 | orchestrator | + compute_list
2025-06-19 11:10:04.515550 | orchestrator | + osism manage compute list testbed-node-3
2025-06-19 11:10:07.107652 | orchestrator | +------+--------+----------+
2025-06-19 11:10:07.107747 | orchestrator | | ID   | Name   | Status   |
2025-06-19 11:10:07.107762 | orchestrator | |------+--------+----------|
2025-06-19 11:10:07.107773 | orchestrator | +------+--------+----------+
2025-06-19 11:10:07.369634 | orchestrator | + osism manage compute list testbed-node-4
2025-06-19 11:10:10.395116 | orchestrator | +--------------------------------------+--------+----------+
2025-06-19 11:10:10.395210 | orchestrator | | ID                                   | Name   | Status   |
2025-06-19 11:10:10.395224 | orchestrator | |--------------------------------------+--------+----------|
2025-06-19 11:10:10.395235 | orchestrator | | 3617e52f-7bde-445d-932d-7a2c661ba8da | test-4 | ACTIVE   |
2025-06-19 11:10:10.395246 | orchestrator | | 3b092941-fd16-4d00-aa15-e1e4c4093c92 | test-3 | ACTIVE   |
2025-06-19 11:10:10.395257 | orchestrator | | ecb71e93-a792-43be-a498-257ed194d9a5 | test-2 | ACTIVE   |
2025-06-19 11:10:10.395296 | orchestrator | | d3247e59-efb8-449f-8e62-aa80c907c108 | test-1 | ACTIVE   |
2025-06-19 11:10:10.395308 | orchestrator | | 5c3f165d-a039-4576-9a60-63ab66a22679 | test   | ACTIVE   |
2025-06-19 11:10:10.395332 | orchestrator | +--------------------------------------+--------+----------+
2025-06-19 11:10:10.724237 | orchestrator | + osism manage compute list testbed-node-5
2025-06-19 11:10:13.210542 | orchestrator | +------+--------+----------+
2025-06-19 11:10:13.210653 | orchestrator | | ID   | Name   | Status   |
2025-06-19 11:10:13.210668 | orchestrator | |------+--------+----------|
2025-06-19 11:10:13.210680 | orchestrator | +------+--------+----------+
2025-06-19 11:10:13.484680 | orchestrator | + server_ping
2025-06-19 11:10:13.486757 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-06-19 11:10:13.486873 | orchestrator | ++ tr -d '\r'
2025-06-19 11:10:16.562572 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-19 11:10:16.562673 | orchestrator | + ping -c3 192.168.112.122
2025-06-19 11:10:16.573270 | orchestrator | PING 192.168.112.122 (192.168.112.122) 56(84) bytes of data.
2025-06-19 11:10:16.573345 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=1 ttl=63 time=8.32 ms
2025-06-19 11:10:17.569267 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=2 ttl=63 time=2.40 ms
2025-06-19 11:10:18.570172 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=3 ttl=63 time=1.75 ms
2025-06-19 11:10:18.570283 | orchestrator |
2025-06-19 11:10:18.570299 | orchestrator | --- 192.168.112.122 ping statistics ---
2025-06-19 11:10:18.570311 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-06-19 11:10:18.570322 | orchestrator | rtt min/avg/max/mdev = 1.746/4.153/8.318/2.956 ms
2025-06-19 11:10:18.571174 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-19 11:10:18.571208 | orchestrator | + ping -c3 192.168.112.138
2025-06-19 11:10:18.583930 | orchestrator | PING 192.168.112.138 (192.168.112.138) 56(84) bytes of data.
2025-06-19 11:10:18.583965 | orchestrator | 64 bytes from 192.168.112.138: icmp_seq=1 ttl=63 time=8.27 ms
2025-06-19 11:10:19.580132 | orchestrator | 64 bytes from 192.168.112.138: icmp_seq=2 ttl=63 time=2.73 ms
2025-06-19 11:10:20.581431 | orchestrator | 64 bytes from 192.168.112.138: icmp_seq=3 ttl=63 time=1.97 ms
2025-06-19 11:10:20.582152 | orchestrator |
2025-06-19 11:10:20.582189 | orchestrator | --- 192.168.112.138 ping statistics ---
2025-06-19 11:10:20.582205 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-06-19 11:10:20.582217 | orchestrator | rtt min/avg/max/mdev = 1.966/4.324/8.274/2.810 ms
2025-06-19 11:10:20.582230 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-19 11:10:20.582391 | orchestrator | + ping -c3 192.168.112.184
2025-06-19 11:10:20.595543 | orchestrator | PING 192.168.112.184 (192.168.112.184) 56(84) bytes of data.
2025-06-19 11:10:20.595627 | orchestrator | 64 bytes from 192.168.112.184: icmp_seq=1 ttl=63 time=8.45 ms
2025-06-19 11:10:21.592061 | orchestrator | 64 bytes from 192.168.112.184: icmp_seq=2 ttl=63 time=2.88 ms
2025-06-19 11:10:22.592437 | orchestrator | 64 bytes from 192.168.112.184: icmp_seq=3 ttl=63 time=2.10 ms
2025-06-19 11:10:22.592548 | orchestrator |
2025-06-19 11:10:22.592564 | orchestrator | --- 192.168.112.184 ping statistics ---
2025-06-19 11:10:22.592576 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-06-19 11:10:22.592587 | orchestrator | rtt min/avg/max/mdev = 2.103/4.477/8.452/2.828 ms
2025-06-19 11:10:22.593239 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-19 11:10:22.593263 | orchestrator | + ping -c3 192.168.112.175
2025-06-19 11:10:22.603031 | orchestrator | PING 192.168.112.175 (192.168.112.175) 56(84) bytes of data.
2025-06-19 11:10:22.603086 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=1 ttl=63 time=5.39 ms
2025-06-19 11:10:23.602137 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=2 ttl=63 time=2.48 ms
2025-06-19 11:10:24.603802 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=3 ttl=63 time=1.90 ms
2025-06-19 11:10:24.603934 | orchestrator |
2025-06-19 11:10:24.603951 | orchestrator | --- 192.168.112.175 ping statistics ---
2025-06-19 11:10:24.603963 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-06-19 11:10:24.604002 | orchestrator | rtt min/avg/max/mdev = 1.901/3.255/5.387/1.525 ms
2025-06-19 11:10:24.604014 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-19 11:10:24.604026 | orchestrator | + ping -c3 192.168.112.112
2025-06-19 11:10:24.613792 | orchestrator | PING 192.168.112.112 (192.168.112.112) 56(84) bytes of data.
2025-06-19 11:10:24.613828 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=1 ttl=63 time=5.00 ms
2025-06-19 11:10:25.613077 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=2 ttl=63 time=2.60 ms
2025-06-19 11:10:26.614478 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=3 ttl=63 time=1.83 ms
2025-06-19 11:10:26.614682 | orchestrator |
2025-06-19 11:10:26.614705 | orchestrator | --- 192.168.112.112 ping statistics ---
2025-06-19 11:10:26.614718 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-06-19 11:10:26.614729 | orchestrator | rtt min/avg/max/mdev = 1.832/3.143/5.000/1.349 ms
2025-06-19 11:10:26.614751 | orchestrator | + osism manage compute migrate --yes --target testbed-node-5 testbed-node-4
2025-06-19 11:10:29.616279 | orchestrator | 2025-06-19 11:10:29 | INFO  | Live migrating server 3617e52f-7bde-445d-932d-7a2c661ba8da
2025-06-19 11:10:39.747249 | orchestrator | 2025-06-19 11:10:39 | INFO  | Live migration of 3617e52f-7bde-445d-932d-7a2c661ba8da (test-4) is still in progress
2025-06-19 11:10:42.039090 | orchestrator | 2025-06-19 11:10:42 | INFO  | Live migration of 3617e52f-7bde-445d-932d-7a2c661ba8da (test-4) is still in progress
2025-06-19 11:10:44.369475 | orchestrator | 2025-06-19 11:10:44 | INFO  | Live migration of 3617e52f-7bde-445d-932d-7a2c661ba8da (test-4) is still in progress
2025-06-19 11:10:46.674573 | orchestrator | 2025-06-19 11:10:46 | INFO  | Live migration of 3617e52f-7bde-445d-932d-7a2c661ba8da (test-4) is still in progress
2025-06-19 11:10:49.050402 | orchestrator | 2025-06-19 11:10:49 | INFO  | Live migration of 3617e52f-7bde-445d-932d-7a2c661ba8da (test-4) is still in progress
2025-06-19 11:10:51.428648 | orchestrator | 2025-06-19 11:10:51 | INFO  | Live migration of 3617e52f-7bde-445d-932d-7a2c661ba8da (test-4) is still in progress
2025-06-19 11:10:53.728909 | orchestrator | 2025-06-19 11:10:53 | INFO  | Live migration of 3617e52f-7bde-445d-932d-7a2c661ba8da (test-4) completed with status ACTIVE
2025-06-19 11:10:53.729012 | orchestrator | 2025-06-19 11:10:53 | INFO  | Live migrating server 3b092941-fd16-4d00-aa15-e1e4c4093c92
2025-06-19 11:11:03.500680 | orchestrator | 2025-06-19 11:11:03 | INFO  | Live migration of 3b092941-fd16-4d00-aa15-e1e4c4093c92 (test-3) is still in progress
2025-06-19 11:11:05.880800 | orchestrator | 2025-06-19 11:11:05 | INFO  | Live migration of 3b092941-fd16-4d00-aa15-e1e4c4093c92 (test-3) is still in progress
2025-06-19 11:11:08.217826 | orchestrator | 2025-06-19 11:11:08 | INFO  | Live migration of 3b092941-fd16-4d00-aa15-e1e4c4093c92 (test-3) is still in progress
2025-06-19 11:11:10.533722 | orchestrator | 2025-06-19 11:11:10 | INFO  | Live migration of 3b092941-fd16-4d00-aa15-e1e4c4093c92 (test-3) is still in progress
2025-06-19 11:11:12.794489 | orchestrator | 2025-06-19 11:11:12 | INFO  | Live migration of 3b092941-fd16-4d00-aa15-e1e4c4093c92 (test-3) is still in progress
2025-06-19 11:11:15.045730 | orchestrator | 2025-06-19 11:11:15 | INFO  | Live migration of 3b092941-fd16-4d00-aa15-e1e4c4093c92 (test-3) is still in progress
2025-06-19 11:11:17.340127 | orchestrator | 2025-06-19 11:11:17 | INFO  | Live migration of 3b092941-fd16-4d00-aa15-e1e4c4093c92 (test-3) is still in progress
2025-06-19 11:11:19.659145 | orchestrator | 2025-06-19 11:11:19 | INFO  | Live migration of 3b092941-fd16-4d00-aa15-e1e4c4093c92 (test-3) completed with status ACTIVE
2025-06-19 11:11:19.659261 | orchestrator | 2025-06-19 11:11:19 | INFO  | Live migrating server ecb71e93-a792-43be-a498-257ed194d9a5
2025-06-19 11:11:30.040034 | orchestrator | 2025-06-19 11:11:30 | INFO  | Live migration of ecb71e93-a792-43be-a498-257ed194d9a5 (test-2) is still in progress
2025-06-19 11:11:32.400594 | orchestrator | 2025-06-19 11:11:32 | INFO  | Live migration of ecb71e93-a792-43be-a498-257ed194d9a5 (test-2) is still in progress
2025-06-19 11:11:34.716044 | orchestrator | 2025-06-19 11:11:34 | INFO  | Live migration of ecb71e93-a792-43be-a498-257ed194d9a5 (test-2) is still in progress
2025-06-19 11:11:37.024105 | orchestrator | 2025-06-19 11:11:37 | INFO  | Live migration of ecb71e93-a792-43be-a498-257ed194d9a5 (test-2) is still in progress
2025-06-19 11:11:39.302345 | orchestrator | 2025-06-19 11:11:39 | INFO  | Live migration of ecb71e93-a792-43be-a498-257ed194d9a5 (test-2) is still in progress
2025-06-19 11:11:41.586794 | orchestrator | 2025-06-19 11:11:41 | INFO  | Live migration of ecb71e93-a792-43be-a498-257ed194d9a5 (test-2) is still in progress
2025-06-19 11:11:43.854236 | orchestrator | 2025-06-19 11:11:43 | INFO  | Live migration of ecb71e93-a792-43be-a498-257ed194d9a5 (test-2) is still in progress
2025-06-19 11:11:46.132097 | orchestrator | 2025-06-19 11:11:46 | INFO  | Live migration of ecb71e93-a792-43be-a498-257ed194d9a5 (test-2) completed with status ACTIVE
2025-06-19 11:11:46.132200 | orchestrator | 2025-06-19 11:11:46 | INFO  | Live migrating server d3247e59-efb8-449f-8e62-aa80c907c108
2025-06-19 11:11:56.359175 | orchestrator | 2025-06-19 11:11:56 | INFO  | Live migration of d3247e59-efb8-449f-8e62-aa80c907c108 (test-1) is still in progress
2025-06-19 11:11:58.731920 | orchestrator | 2025-06-19 11:11:58 | INFO  | Live migration of d3247e59-efb8-449f-8e62-aa80c907c108 (test-1) is still in progress
2025-06-19 11:12:01.084650 | orchestrator | 2025-06-19 11:12:01 | INFO  | Live migration of d3247e59-efb8-449f-8e62-aa80c907c108 (test-1) is still in progress
2025-06-19 11:12:03.393433 | orchestrator | 2025-06-19 11:12:03 | INFO  | Live migration of d3247e59-efb8-449f-8e62-aa80c907c108 (test-1) is still in progress
2025-06-19 11:12:05.743026 | orchestrator | 2025-06-19 11:12:05 | INFO  | Live migration of d3247e59-efb8-449f-8e62-aa80c907c108 (test-1) is still in progress
2025-06-19 11:12:08.078541 | orchestrator | 2025-06-19 11:12:08 | INFO  | Live migration of d3247e59-efb8-449f-8e62-aa80c907c108 (test-1) is still in progress
2025-06-19 11:12:10.440884 | orchestrator | 2025-06-19 11:12:10 | INFO  | Live migration of d3247e59-efb8-449f-8e62-aa80c907c108 (test-1) completed with status ACTIVE
2025-06-19 11:12:10.440988 | orchestrator | 2025-06-19 11:12:10 | INFO  | Live migrating server 5c3f165d-a039-4576-9a60-63ab66a22679
2025-06-19 11:12:20.563559 | orchestrator | 2025-06-19 11:12:20 | INFO  | Live migration of 5c3f165d-a039-4576-9a60-63ab66a22679 (test) is still in progress
2025-06-19 11:12:22.891721 | orchestrator | 2025-06-19 11:12:22 | INFO  | Live migration of 5c3f165d-a039-4576-9a60-63ab66a22679 (test) is still in progress
2025-06-19 11:12:25.209654 | orchestrator | 2025-06-19 11:12:25 | INFO  | Live migration of 5c3f165d-a039-4576-9a60-63ab66a22679 (test) is still in progress
2025-06-19 11:12:27.487024 | orchestrator | 2025-06-19 11:12:27 | INFO  | Live migration of 5c3f165d-a039-4576-9a60-63ab66a22679 (test) is still in progress
2025-06-19 11:12:29.811047 | orchestrator | 2025-06-19 11:12:29 | INFO  | Live migration of 5c3f165d-a039-4576-9a60-63ab66a22679 (test) is still in progress
2025-06-19 11:12:32.179878 | orchestrator | 2025-06-19 11:12:32 | INFO  | Live migration of 5c3f165d-a039-4576-9a60-63ab66a22679 (test) is still in progress
2025-06-19 11:12:34.517877 | orchestrator | 2025-06-19 11:12:34 | INFO  | Live migration of 5c3f165d-a039-4576-9a60-63ab66a22679 (test) is still in progress
2025-06-19 11:12:36.745208 | orchestrator | 2025-06-19 11:12:36 | INFO  | Live migration of 5c3f165d-a039-4576-9a60-63ab66a22679 (test) is still in progress
2025-06-19 11:12:39.019220 | orchestrator | 2025-06-19 11:12:39 | INFO  | Live migration of 5c3f165d-a039-4576-9a60-63ab66a22679 (test) is still in progress
2025-06-19 11:12:41.395947 | orchestrator | 2025-06-19 11:12:41 | INFO  | Live migration of 5c3f165d-a039-4576-9a60-63ab66a22679 (test) is still in progress
2025-06-19 11:12:43.761293 | orchestrator | 2025-06-19 11:12:43 | INFO  | Live migration of 5c3f165d-a039-4576-9a60-63ab66a22679 (test) completed with status ACTIVE
2025-06-19 11:12:44.012524 | orchestrator | + compute_list
2025-06-19 11:12:44.012644 | orchestrator | + osism manage compute list testbed-node-3
2025-06-19 11:12:46.609961 | orchestrator | +------+--------+----------+
2025-06-19 11:12:46.610123 | orchestrator | | ID   | Name   | Status   |
2025-06-19 11:12:46.610139 | orchestrator | |------+--------+----------|
2025-06-19 11:12:46.610151 | orchestrator | +------+--------+----------+
2025-06-19 11:12:46.872863 | orchestrator | + osism manage compute list testbed-node-4
2025-06-19 11:12:49.399753 | orchestrator | +------+--------+----------+
2025-06-19 11:12:49.399905 | orchestrator | | ID   | Name   | Status   |
2025-06-19 11:12:49.399921 | orchestrator | |------+--------+----------|
2025-06-19 11:12:49.399932 | orchestrator | +------+--------+----------+
2025-06-19 11:12:49.670177 | orchestrator | + osism manage compute list testbed-node-5
2025-06-19 11:12:52.784583 | orchestrator | +--------------------------------------+--------+----------+
2025-06-19 11:12:52.784693 | orchestrator | | ID                                   | Name   | Status   |
2025-06-19 11:12:52.784709 | orchestrator | |--------------------------------------+--------+----------|
2025-06-19 11:12:52.784720 | orchestrator | | 3617e52f-7bde-445d-932d-7a2c661ba8da | test-4 | ACTIVE   |
2025-06-19 11:12:52.784731 | orchestrator | | 3b092941-fd16-4d00-aa15-e1e4c4093c92 | test-3 | ACTIVE   |
2025-06-19 11:12:52.784742 | orchestrator | | ecb71e93-a792-43be-a498-257ed194d9a5 | test-2 | ACTIVE   |
2025-06-19 11:12:52.784753 | orchestrator | | d3247e59-efb8-449f-8e62-aa80c907c108 | test-1 | ACTIVE   |
2025-06-19 11:12:52.784764 | orchestrator | | 5c3f165d-a039-4576-9a60-63ab66a22679 | test   | ACTIVE   |
2025-06-19 11:12:52.784842 | orchestrator | +--------------------------------------+--------+----------+
2025-06-19 11:12:53.019470 | orchestrator | + server_ping
2025-06-19 11:12:53.020543 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-06-19 11:12:53.020576 | orchestrator | ++ tr -d '\r'
2025-06-19 11:12:55.904524 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-19 11:12:55.904632 | orchestrator | + ping -c3 192.168.112.122
2025-06-19 11:12:55.914848 | orchestrator | PING 192.168.112.122 (192.168.112.122) 56(84) bytes of data.
2025-06-19 11:12:55.914911 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=1 ttl=63 time=8.77 ms 2025-06-19 11:12:56.910758 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=2 ttl=63 time=3.03 ms 2025-06-19 11:12:57.911610 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=3 ttl=63 time=2.28 ms 2025-06-19 11:12:57.911718 | orchestrator | 2025-06-19 11:12:57.911734 | orchestrator | --- 192.168.112.122 ping statistics --- 2025-06-19 11:12:57.911747 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-19 11:12:57.911758 | orchestrator | rtt min/avg/max/mdev = 2.277/4.692/8.770/2.899 ms 2025-06-19 11:12:57.912174 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-19 11:12:57.912199 | orchestrator | + ping -c3 192.168.112.138 2025-06-19 11:12:57.928281 | orchestrator | PING 192.168.112.138 (192.168.112.138) 56(84) bytes of data. 2025-06-19 11:12:57.928374 | orchestrator | 64 bytes from 192.168.112.138: icmp_seq=1 ttl=63 time=11.3 ms 2025-06-19 11:12:58.921586 | orchestrator | 64 bytes from 192.168.112.138: icmp_seq=2 ttl=63 time=2.99 ms 2025-06-19 11:12:59.922260 | orchestrator | 64 bytes from 192.168.112.138: icmp_seq=3 ttl=63 time=2.41 ms 2025-06-19 11:12:59.922434 | orchestrator | 2025-06-19 11:12:59.922451 | orchestrator | --- 192.168.112.138 ping statistics --- 2025-06-19 11:12:59.922464 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-19 11:12:59.922476 | orchestrator | rtt min/avg/max/mdev = 2.414/5.550/11.253/4.038 ms 2025-06-19 11:12:59.922614 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-19 11:12:59.922632 | orchestrator | + ping -c3 192.168.112.184 2025-06-19 11:12:59.935391 | orchestrator | PING 192.168.112.184 (192.168.112.184) 56(84) bytes of data. 
2025-06-19 11:12:59.935431 | orchestrator | 64 bytes from 192.168.112.184: icmp_seq=1 ttl=63 time=9.52 ms 2025-06-19 11:13:00.930868 | orchestrator | 64 bytes from 192.168.112.184: icmp_seq=2 ttl=63 time=2.79 ms 2025-06-19 11:13:01.932623 | orchestrator | 64 bytes from 192.168.112.184: icmp_seq=3 ttl=63 time=2.18 ms 2025-06-19 11:13:01.932728 | orchestrator | 2025-06-19 11:13:01.932744 | orchestrator | --- 192.168.112.184 ping statistics --- 2025-06-19 11:13:01.932756 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-06-19 11:13:01.932898 | orchestrator | rtt min/avg/max/mdev = 2.184/4.828/9.517/3.324 ms 2025-06-19 11:13:01.933006 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-19 11:13:01.933022 | orchestrator | + ping -c3 192.168.112.175 2025-06-19 11:13:01.943108 | orchestrator | PING 192.168.112.175 (192.168.112.175) 56(84) bytes of data. 2025-06-19 11:13:01.943154 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=1 ttl=63 time=6.56 ms 2025-06-19 11:13:02.941404 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=2 ttl=63 time=2.56 ms 2025-06-19 11:13:03.942339 | orchestrator | 64 bytes from 192.168.112.175: icmp_seq=3 ttl=63 time=2.14 ms 2025-06-19 11:13:03.942440 | orchestrator | 2025-06-19 11:13:03.942455 | orchestrator | --- 192.168.112.175 ping statistics --- 2025-06-19 11:13:03.942469 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-19 11:13:03.942480 | orchestrator | rtt min/avg/max/mdev = 2.135/3.751/6.555/1.990 ms 2025-06-19 11:13:03.942492 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-19 11:13:03.942503 | orchestrator | + ping -c3 192.168.112.112 2025-06-19 11:13:03.952824 | orchestrator | PING 192.168.112.112 (192.168.112.112) 56(84) bytes of data. 
2025-06-19 11:13:03.952891 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=1 ttl=63 time=8.19 ms
2025-06-19 11:13:04.948867 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=2 ttl=63 time=2.38 ms
2025-06-19 11:13:05.950632 | orchestrator | 64 bytes from 192.168.112.112: icmp_seq=3 ttl=63 time=2.16 ms
2025-06-19 11:13:05.950744 | orchestrator |
2025-06-19 11:13:05.950759 | orchestrator | --- 192.168.112.112 ping statistics ---
2025-06-19 11:13:05.950821 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-06-19 11:13:05.950833 | orchestrator | rtt min/avg/max/mdev = 2.158/4.241/8.185/2.789 ms
2025-06-19 11:13:06.321113 | orchestrator | ok: Runtime: 0:19:26.191401
2025-06-19 11:13:06.375920 |
2025-06-19 11:13:06.376045 | TASK [Run tempest]
2025-06-19 11:13:06.911198 | orchestrator | skipping: Conditional result was False
2025-06-19 11:13:06.931205 |
2025-06-19 11:13:06.931385 | TASK [Check prometheus alert status]
2025-06-19 11:13:07.467815 | orchestrator | skipping: Conditional result was False
2025-06-19 11:13:07.470439 |
2025-06-19 11:13:07.470568 | PLAY RECAP
2025-06-19 11:13:07.470672 | orchestrator | ok: 24 changed: 11 unreachable: 0 failed: 0 skipped: 5 rescued: 0 ignored: 0
2025-06-19 11:13:07.470719 |
2025-06-19 11:13:07.694549 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-06-19 11:13:07.696020 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-06-19 11:13:08.438137 |
2025-06-19 11:13:08.438315 | PLAY [Post output play]
2025-06-19 11:13:08.454641 |
2025-06-19 11:13:08.454864 | LOOP [stage-output : Register sources]
2025-06-19 11:13:08.533658 |
2025-06-19 11:13:08.533990 | TASK [stage-output : Check sudo]
2025-06-19 11:13:09.375404 | orchestrator | sudo: a password is required
2025-06-19 11:13:09.571306 | orchestrator | ok: Runtime: 0:00:00.016158
2025-06-19 11:13:09.585708 |
2025-06-19 11:13:09.585951 | LOOP [stage-output : Set source and destination for files and folders]
2025-06-19 11:13:09.622556 |
2025-06-19 11:13:09.622807 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-06-19 11:13:09.693962 | orchestrator | ok
2025-06-19 11:13:09.700012 |
2025-06-19 11:13:09.700126 | LOOP [stage-output : Ensure target folders exist]
2025-06-19 11:13:10.156505 | orchestrator | ok: "docs"
2025-06-19 11:13:10.156774 |
2025-06-19 11:13:10.427266 | orchestrator | ok: "artifacts"
2025-06-19 11:13:10.675474 | orchestrator | ok: "logs"
2025-06-19 11:13:10.691545 |
2025-06-19 11:13:10.691705 | LOOP [stage-output : Copy files and folders to staging folder]
2025-06-19 11:13:10.723762 |
2025-06-19 11:13:10.723986 | TASK [stage-output : Make all log files readable]
2025-06-19 11:13:11.004998 | orchestrator | ok
2025-06-19 11:13:11.013687 |
2025-06-19 11:13:11.013894 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-06-19 11:13:11.048984 | orchestrator | skipping: Conditional result was False
2025-06-19 11:13:11.062973 |
2025-06-19 11:13:11.063143 | TASK [stage-output : Discover log files for compression]
2025-06-19 11:13:11.088343 | orchestrator | skipping: Conditional result was False
2025-06-19 11:13:11.099430 |
2025-06-19 11:13:11.099589 | LOOP [stage-output : Archive everything from logs]
2025-06-19 11:13:11.141607 |
2025-06-19 11:13:11.141814 | PLAY [Post cleanup play]
2025-06-19 11:13:11.149894 |
2025-06-19 11:13:11.150014 | TASK [Set cloud fact (Zuul deployment)]
2025-06-19 11:13:11.218402 | orchestrator | ok
2025-06-19 11:13:11.231001 |
2025-06-19 11:13:11.231130 | TASK [Set cloud fact (local deployment)]
2025-06-19 11:13:11.265594 | orchestrator | skipping: Conditional result was False
2025-06-19 11:13:11.282543 |
2025-06-19 11:13:11.282705 | TASK [Clean the cloud environment]
2025-06-19 11:13:12.075692 | orchestrator | 2025-06-19 11:13:12 - clean up servers
2025-06-19 11:13:12.805820 | orchestrator | 2025-06-19 11:13:12 - testbed-manager
2025-06-19 11:13:12.890181 | orchestrator | 2025-06-19 11:13:12 - testbed-node-4
2025-06-19 11:13:12.976376 | orchestrator | 2025-06-19 11:13:12 - testbed-node-5
2025-06-19 11:13:13.067655 | orchestrator | 2025-06-19 11:13:13 - testbed-node-2
2025-06-19 11:13:13.168859 | orchestrator | 2025-06-19 11:13:13 - testbed-node-1
2025-06-19 11:13:13.255778 | orchestrator | 2025-06-19 11:13:13 - testbed-node-0
2025-06-19 11:13:13.350483 | orchestrator | 2025-06-19 11:13:13 - testbed-node-3
2025-06-19 11:13:13.443890 | orchestrator | 2025-06-19 11:13:13 - clean up keypairs
2025-06-19 11:13:13.461676 | orchestrator | 2025-06-19 11:13:13 - testbed
2025-06-19 11:13:13.487160 | orchestrator | 2025-06-19 11:13:13 - wait for servers to be gone
2025-06-19 11:13:22.217193 | orchestrator | 2025-06-19 11:13:22 - clean up ports
2025-06-19 11:13:22.414286 | orchestrator | 2025-06-19 11:13:22 - 3ed8d97e-7b0b-4cd1-b614-f3205d16609e
2025-06-19 11:13:22.706854 | orchestrator | 2025-06-19 11:13:22 - 7761f150-7be1-4f4b-b8b0-f3a54e4a4fd6
2025-06-19 11:13:23.004472 | orchestrator | 2025-06-19 11:13:23 - 9222de8d-7b3b-449a-bd03-f949947ebef7
2025-06-19 11:13:23.209794 | orchestrator | 2025-06-19 11:13:23 - 9c6573a9-497a-435f-a41d-5f99630aad82
2025-06-19 11:13:23.421343 | orchestrator | 2025-06-19 11:13:23 - cfeaa5b2-ab1e-431d-979e-a8decf31e726
2025-06-19 11:13:23.678326 | orchestrator | 2025-06-19 11:13:23 - ddb6a568-8100-4956-b163-f8449f61f321
2025-06-19 11:13:23.884554 | orchestrator | 2025-06-19 11:13:23 - f1231e8d-4d5a-4329-9c3a-2915f047249c
2025-06-19 11:13:24.304535 | orchestrator | 2025-06-19 11:13:24 - clean up volumes
2025-06-19 11:13:24.414701 | orchestrator | 2025-06-19 11:13:24 - testbed-volume-5-node-base
2025-06-19 11:13:24.453087 | orchestrator | 2025-06-19 11:13:24 - testbed-volume-0-node-base
2025-06-19 11:13:24.492458 | orchestrator | 2025-06-19 11:13:24 - testbed-volume-1-node-base
2025-06-19 11:13:24.536493 | orchestrator | 2025-06-19 11:13:24 - testbed-volume-manager-base
2025-06-19 11:13:24.579339 | orchestrator | 2025-06-19 11:13:24 - testbed-volume-2-node-base
2025-06-19 11:13:24.620684 | orchestrator | 2025-06-19 11:13:24 - testbed-volume-4-node-base
2025-06-19 11:13:24.660359 | orchestrator | 2025-06-19 11:13:24 - testbed-volume-3-node-base
2025-06-19 11:13:24.701194 | orchestrator | 2025-06-19 11:13:24 - testbed-volume-8-node-5
2025-06-19 11:13:24.740076 | orchestrator | 2025-06-19 11:13:24 - testbed-volume-3-node-3
2025-06-19 11:13:24.781195 | orchestrator | 2025-06-19 11:13:24 - testbed-volume-1-node-4
2025-06-19 11:13:24.822290 | orchestrator | 2025-06-19 11:13:24 - testbed-volume-0-node-3
2025-06-19 11:13:24.862572 | orchestrator | 2025-06-19 11:13:24 - testbed-volume-6-node-3
2025-06-19 11:13:24.903869 | orchestrator | 2025-06-19 11:13:24 - testbed-volume-2-node-5
2025-06-19 11:13:24.945991 | orchestrator | 2025-06-19 11:13:24 - testbed-volume-4-node-4
2025-06-19 11:13:24.988262 | orchestrator | 2025-06-19 11:13:24 - testbed-volume-7-node-4
2025-06-19 11:13:25.030529 | orchestrator | 2025-06-19 11:13:25 - testbed-volume-5-node-5
2025-06-19 11:13:25.076704 | orchestrator | 2025-06-19 11:13:25 - disconnect routers
2025-06-19 11:13:25.233186 | orchestrator | 2025-06-19 11:13:25 - testbed
2025-06-19 11:13:26.562149 | orchestrator | 2025-06-19 11:13:26 - clean up subnets
2025-06-19 11:13:26.600641 | orchestrator | 2025-06-19 11:13:26 - subnet-testbed-management
2025-06-19 11:13:26.789619 | orchestrator | 2025-06-19 11:13:26 - clean up networks
2025-06-19 11:13:26.961908 | orchestrator | 2025-06-19 11:13:26 - net-testbed-management
2025-06-19 11:13:27.252953 | orchestrator | 2025-06-19 11:13:27 - clean up security groups
2025-06-19 11:13:27.295626 | orchestrator | 2025-06-19 11:13:27 - testbed-management
2025-06-19 11:13:27.409633 | orchestrator | 2025-06-19 11:13:27 - testbed-node
2025-06-19 11:13:27.520145 | orchestrator | 2025-06-19 11:13:27 - clean up floating ips
2025-06-19 11:13:27.561720 | orchestrator | 2025-06-19 11:13:27 - 81.163.192.19
2025-06-19 11:13:27.925396 | orchestrator | 2025-06-19 11:13:27 - clean up routers
2025-06-19 11:13:27.987262 | orchestrator | 2025-06-19 11:13:27 - testbed
2025-06-19 11:13:28.839100 | orchestrator | ok: Runtime: 0:00:17.206895
2025-06-19 11:13:28.843806 |
2025-06-19 11:13:28.843981 | PLAY RECAP
2025-06-19 11:13:28.844251 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-06-19 11:13:28.844333 |
2025-06-19 11:13:28.991566 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-06-19 11:13:28.992528 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-06-19 11:13:29.746646 |
2025-06-19 11:13:29.746886 | PLAY [Cleanup play]
2025-06-19 11:13:29.763124 |
2025-06-19 11:13:29.763268 | TASK [Set cloud fact (Zuul deployment)]
2025-06-19 11:13:29.834039 | orchestrator | ok
2025-06-19 11:13:29.844707 |
2025-06-19 11:13:29.844905 | TASK [Set cloud fact (local deployment)]
2025-06-19 11:13:29.889863 | orchestrator | skipping: Conditional result was False
2025-06-19 11:13:29.905049 |
2025-06-19 11:13:29.905199 | TASK [Clean the cloud environment]
2025-06-19 11:13:31.039121 | orchestrator | 2025-06-19 11:13:31 - clean up servers
2025-06-19 11:13:31.528799 | orchestrator | 2025-06-19 11:13:31 - clean up keypairs
2025-06-19 11:13:31.541207 | orchestrator | 2025-06-19 11:13:31 - wait for servers to be gone
2025-06-19 11:13:31.582398 | orchestrator | 2025-06-19 11:13:31 - clean up ports
2025-06-19 11:13:31.658226 | orchestrator | 2025-06-19 11:13:31 - clean up volumes
2025-06-19 11:13:31.723504 | orchestrator | 2025-06-19 11:13:31 - disconnect routers
2025-06-19 11:13:31.752007 | orchestrator | 2025-06-19 11:13:31 - clean up subnets
2025-06-19 11:13:31.771187 | orchestrator | 2025-06-19 11:13:31 - clean up networks
2025-06-19 11:13:31.927814 | orchestrator | 2025-06-19 11:13:31 - clean up security groups
2025-06-19 11:13:31.959071 | orchestrator | 2025-06-19 11:13:31 - clean up floating ips
2025-06-19 11:13:31.983380 | orchestrator | 2025-06-19 11:13:31 - clean up routers
2025-06-19 11:13:32.444068 | orchestrator | ok: Runtime: 0:00:01.312427
2025-06-19 11:13:32.447685 |
2025-06-19 11:13:32.447860 | PLAY RECAP
2025-06-19 11:13:32.447965 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-06-19 11:13:32.448018 |
2025-06-19 11:13:32.591381 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-06-19 11:13:32.592821 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-06-19 11:13:33.334183 |
2025-06-19 11:13:33.334342 | PLAY [Base post-fetch]
2025-06-19 11:13:33.350320 |
2025-06-19 11:13:33.350452 | TASK [fetch-output : Set log path for multiple nodes]
2025-06-19 11:13:33.405843 | orchestrator | skipping: Conditional result was False
2025-06-19 11:13:33.420669 |
2025-06-19 11:13:33.420943 | TASK [fetch-output : Set log path for single node]
2025-06-19 11:13:33.469831 | orchestrator | ok
2025-06-19 11:13:33.478121 |
2025-06-19 11:13:33.478256 | LOOP [fetch-output : Ensure local output dirs]
2025-06-19 11:13:33.962388 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/ec99971b166f4fa8be6dbdcce14b0b3d/work/logs"
2025-06-19 11:13:34.228684 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/ec99971b166f4fa8be6dbdcce14b0b3d/work/artifacts"
2025-06-19 11:13:34.516098 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/ec99971b166f4fa8be6dbdcce14b0b3d/work/docs"
2025-06-19 11:13:34.540066 |
2025-06-19 11:13:34.540302 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-06-19 11:13:35.518562 | orchestrator | changed: .d..t...... ./
2025-06-19 11:13:35.518942 | orchestrator | changed: All items complete
2025-06-19 11:13:35.518996 |
2025-06-19 11:13:36.360815 | orchestrator | changed: .d..t...... ./
2025-06-19 11:13:37.098531 | orchestrator | changed: .d..t...... ./
2025-06-19 11:13:37.123667 |
2025-06-19 11:13:37.123885 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-06-19 11:13:37.161442 | orchestrator | skipping: Conditional result was False
2025-06-19 11:13:37.165477 | orchestrator | skipping: Conditional result was False
2025-06-19 11:13:37.185200 |
2025-06-19 11:13:37.185348 | PLAY RECAP
2025-06-19 11:13:37.185433 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-06-19 11:13:37.185476 |
2025-06-19 11:13:37.333245 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-06-19 11:13:37.334233 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-06-19 11:13:38.086952 |
2025-06-19 11:13:38.087147 | PLAY [Base post]
2025-06-19 11:13:38.104689 |
2025-06-19 11:13:38.104947 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-06-19 11:13:39.146156 | orchestrator | changed
2025-06-19 11:13:39.156506 |
2025-06-19 11:13:39.156651 | PLAY RECAP
2025-06-19 11:13:39.156751 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-06-19 11:13:39.156828 |
2025-06-19 11:13:39.278451 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-06-19 11:13:39.279504 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-06-19 11:13:40.109571 |
2025-06-19 11:13:40.109803 | PLAY [Base post-logs]
2025-06-19 11:13:40.121079 |
2025-06-19 11:13:40.121237 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-06-19 11:13:40.597477 | localhost | changed
2025-06-19 11:13:40.614903 |
2025-06-19 11:13:40.615143 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-06-19 11:13:40.654645 | localhost | ok
2025-06-19 11:13:40.660932 |
2025-06-19 11:13:40.661108 | TASK [Set zuul-log-path fact]
2025-06-19 11:13:40.682346 | localhost | ok
2025-06-19 11:13:40.703593 |
2025-06-19 11:13:40.703901 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-06-19 11:13:40.742658 | localhost | ok
2025-06-19 11:13:40.750030 |
2025-06-19 11:13:40.750307 | TASK [upload-logs : Create log directories]
2025-06-19 11:13:41.290336 | localhost | changed
2025-06-19 11:13:41.293372 |
2025-06-19 11:13:41.293495 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-06-19 11:13:41.837469 | localhost -> localhost | ok: Runtime: 0:00:00.007594
2025-06-19 11:13:41.844589 |
2025-06-19 11:13:41.844783 | TASK [upload-logs : Upload logs to log server]
2025-06-19 11:13:42.441754 | localhost | Output suppressed because no_log was given
2025-06-19 11:13:42.445028 |
2025-06-19 11:13:42.445236 | LOOP [upload-logs : Compress console log and json output]
2025-06-19 11:13:42.506929 | localhost | skipping: Conditional result was False
2025-06-19 11:13:42.514111 | localhost | skipping: Conditional result was False
2025-06-19 11:13:42.527374 |
2025-06-19 11:13:42.527843 | LOOP [upload-logs : Upload compressed console log and json output]
2025-06-19 11:13:42.580059 | localhost | skipping: Conditional result was False
2025-06-19 11:13:42.580689 |
2025-06-19 11:13:42.584289 | localhost | skipping: Conditional result was False
2025-06-19 11:13:42.591591 |
2025-06-19 11:13:42.591865 | LOOP [upload-logs : Upload console log and json output]
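The "Clean the cloud environment" task in this log tears resources down in dependency order: servers first (then a wait for them to be gone), then the ports, volumes, subnets, and networks they held, with routers disconnected before their subnets are deleted and removed last. A minimal sketch of that ordering as observed here (the `CLEANUP_ORDER` list and `must_precede` helper are illustrative, not part of the testbed scripts):

```python
# Teardown steps in the order the cleanup task logged them; each resource
# type is removed only after everything that references it is gone.
CLEANUP_ORDER = [
    "servers", "keypairs", "ports", "volumes",
    "disconnect routers", "subnets", "networks",
    "security groups", "floating ips", "routers",
]

def must_precede(earlier: str, later: str) -> bool:
    """True if `earlier` is cleaned up before `later` in this ordering."""
    return CLEANUP_ORDER.index(earlier) < CLEANUP_ORDER.index(later)

# Servers hold the ports, and a router can only be deleted once its
# subnet interfaces have been disconnected.
assert must_precede("servers", "ports")
assert must_precede("disconnect routers", "subnets")
assert must_precede("subnets", "routers")
```

Running the same cleanup twice is safe, as the second "Clean the cloud environment" pass above shows: every step finds nothing left to delete and completes in about a second.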